
The People’s Platform: Taking Back Power and Culture in the Digital Age
Astra Taylor


The internet has been hailed as an unprecedented democratising force, a place where everyone can participate. But how true is this? Dismantling the techno-utopian vision, ‘The People’s Platform’ argues that for all our “tweeting” and “sharing,” the internet in fact reflects and amplifies real-world inequalities as much as it reduces them.

What we have seen so far, Astra Taylor argues, has been not a revolution but a rearrangement. A handful of giants like Amazon, Apple, Google and Facebook are our gatekeepers. And the worst habits of the old media model – the pressure to seek easy celebrity – have proliferated. When culture is “free,” creative work has diminishing value and advertising fuels the system.

We can do better, Taylor insists. The online world does offer a unique opportunity, but a democratic culture that supports the diverse and lasting will not spring up from technology alone. If we want the internet to be a people’s platform, we will have to make it so.

CONTENTS

Cover

Title Page

Preface

1. A Peasant’s Kingdom

2. For Love or Money

3. What We Want

4. Unequal Uptake

5. The Double Anchor

6. Drawing a Line

Conclusion

Notes

Index

Acknowledgments

About the Author

Also by Astra Taylor

Copyright

About the Publisher




PREFACE


When I was twelve years old, while most of my peers were playing outside, I hunkered down in my family’s den, consumed by the project of making my own magazine. Obsessed with animal rights and environmentalism, I imagined my publication as a homemade corrective to corporate culture, a place where other kids could learn the truth that Saturday morning cartoons, big-budget movies, and advertisements for “Happy Meals” hid from them. I wrangled my friends into writing for it (I know it’s hard to believe I had any), used desktop publishing software to design it, and was thrilled that the father of one of my conspirators managed a local Kinko’s, which meant we could make copies at a steep discount. Every couple of months my parents drove me to the handful of bookstores and food co-ops in Athens, Georgia, where I eagerly asked the proprietors if I could give them the latest issue, convinced that when enough young people read my cri de coeur the world would change.

It was a strange way to spend one’s preadolescence. But equally strange, now, is to think of how much work I had to do to get it into readers’ hands once everything was written and edited. That’s how it went back in the early nineties: each precious copy could be accounted for, either given to a friend, handed out on a street corner, shelved at a local store, or mailed to the few dozen subscribers I managed to amass. And I, with access to a computer, a printer, and ample professional copiers, had it pretty easy compared to those who had walked a similar road just decades before me: a veteran political organizer told me how he and his friends had to sell blood in order to raise the funds to buy a mimeograph machine so they could make a newsletter in the early sixties.

When I was working on my magazine I had only vague inklings that the Internet even existed. Today any kid with a smartphone and a message has the potential to reach more people with the push of a button than I did during two years of self-publishing. New technologies have opened up previously unimaginable avenues for self-expression and exposure to information, and each passing year has only made it easier to spread the word.

In many respects, my adult work as an independent filmmaker has been motivated by the same concerns as my childhood hobby: frustration with the mainstream media. So many subjects I cared about were being ignored; so many worthwhile stories went uncovered. I picked up a camera to fill in the gap, producing various documentaries focused on social justice and directing two features about philosophy. On the side I’ve written articles and essays for the independent press, covering topics including disability rights and alternative education. When Occupy Wall Street took off in the fall of 2011, I became one of the coeditors of a movement broadsheet called the Occupy! Gazette, five crowd-funded issues in total, which my cohorts and I gave away for free on the Web and in print.

I’m a prime candidate, in other words, for cheering on the revolution that is purportedly being ushered in by the Internet. The digital transformation has been hailed as the great cultural leveler, putting the tools of creation and dissemination in everyone’s hands and wresting control from long-established institutions and actors. Due to its remarkable architecture, the Internet facilitates creativity and communication in unprecedented ways. Each of us is now our own broadcaster; we are no longer passive consumers but active producers. Unlike the one-way, top-down transmission of radio or television and even records and books, we finally have a medium through which everyone’s voice can supposedly be heard.

To all of this I shout an enthusiastic hurrah. Progressives like myself have spent decades decrying mass culture and denouncing big media. Since 1944, when Max Horkheimer and Theodor Adorno published their influential essay “The Culture Industry: Enlightenment as Mass Deception,” critics have sounded the alarm about powerful corporate interests distorting our culture and drowning out democracy in pursuit of profit.

But while heirs to this tradition continue to worry about commercialism and media consolidation, there is now a countervailing tendency to assume that the Internet, by revolutionizing our media system, has rendered such concerns moot. In a digital world, the number of channels is theoretically infinite, and no one can tell anyone what to consume. We are the ultimate deciders, fully in charge of our media destinies, choosing what to look at, actively seeking and clicking instead of having our consumption foisted upon us by a cabal of corporate executives.

As a consequence of the Internet, it is assumed that traditional gatekeepers will crumble and middlemen will wither. The new orthodoxy envisions the Web as a kind of Robin Hood, stealing audience and influence away from the big and giving to the small. Networked technologies will put professionals and amateurs on an even playing field, or even give the latter an advantage. Artists and writers will thrive without institutional backing, able to reach their audiences directly. A golden age of sharing and collaboration will be ushered in, modeled on Wikipedia and open source software.

In many wonderful ways this is the world we have been waiting for. So what’s the catch? In some crucial respects the standard assumptions about the Internet’s inevitable effects have misled us. New technologies have undoubtedly removed barriers to entry, yet, as I will show, cultural democracy remains elusive. While it’s true that anyone with an Internet connection can speak online, that doesn’t mean our megaphones blast our messages at the same volume. Online, some speak louder than others. There are the followed and the followers. As should be obvious to anyone with an e-mail account, the Internet, though open to all, is hardly an egalitarian or noncommercial paradise, even if you bracket all the porn and shopping sites.

To understand why the most idealistic predictions about how the Internet would transform cultural production and distribution, upending the balance of power in the process, have not come to pass, we need to look critically at the current state of our media system instead of celebrating a rosy vision of what our new, networked tools theoretically make possible or the changes they will hypothetically unleash. What’s more, we need to look ahead and recognize the forces that are shaping the development and implementation of technology—economic forces in particular.

Writing critically about technological and cultural transformation means proceeding with caution. Writers often fall into one of two camps, the cheerleaders of progress at any cost and the prophets of doom who condemn change, lamenting all they imagine will be lost. This pattern long precedes us. In 1829, around the time advances in locomotion and telegraphy inspired a generation to speak rapturously of the “annihilation of space and time,” Thomas Carlyle, the Victorian era’s most irascible and esteemed man of letters, published a sweeping indictment of what he called the Mechanical Age.

Everywhere Carlyle saw new contraptions replacing time-honored techniques—there were machines to drive humans to work faster or replace them altogether—and he was indignant: “We war with rude Nature; and, by our resistless engines, come off always victorious, and loaded with spoils.” Yet the spoils of this war, he anxiously observed, were not evenly distributed. While some raced to the top, others ate dust. Wealth had “gathered itself more and more into masses, strangely altering the old relations, and increasing the distance between the rich and the poor.” More worrisome still, mechanism was encroaching on the inner self. “Not the external and physical alone is now managed by machinery, but the internal and spiritual also,” he warned. “Men are grown mechanical in head and in heart, as well as in hand,” a shift he imagined would make us not wiser but worse off.

Two years later, Timothy Walker, a young American with a career in law ahead of him, wrote a vigorous rebuttal entitled “Defense of Mechanical Philosophy.” Where Carlyle feared the mechanical metaphor making society over in its image, Walker welcomed such a shift, dismissing Carlyle as a vaporizing mystic. Mechanism, in Walker’s judgment, had caused no injury, only advantage. Where mountains stood obstructing, mechanism flattened them. Where the ocean divided, mechanism stepped across. “The horse is to be unharnessed, because he is too slow; and the ox is to be unyoked, because he is too weak. Machines are to perform the drudgery of man, while he is to look on in self-complacent ease.” Where, Walker asked, is the wrong in any of this?

Carlyle, Walker observed, feared “that mind will become subjected to the laws of matter; that physical science will be built up on the ruins of our spiritual nature; that in our rage for machinery, we shall ourselves become machines.” On the contrary, Walker argued, machines would free our minds by freeing our bodies from tedious labor, thus permitting all of humankind to become “philosophers, poets, and votaries of art.” That “large numbers” of people had been thrown out of work as a consequence of technological change is but a “temporary inconvenience,” Walker assured his readers—a mere misstep on mechanism’s “triumphant march.”

Today, most pronouncements concerning the impact of technology on our culture, democracy, and work resound with Carlyle’s and Walker’s sentiments, their well-articulated insights worn down into twenty-first-century sound bites. The argument about the impact of the Internet is relentlessly binary, techno-optimists facing off against techno-skeptics. Will the digital transformation liberate humanity or tether us with virtual chains? Do communicative technologies fire our imaginations or dull our senses? Do social media nurture community or intensify our isolation, expand our intellectual faculties or wither our capacity for reflection, make us better citizens or more efficient consumers? Have we become a nation of skimmers, staying in the shallows of incessant stimulation, or are we evolving into expert synthesizers and multitaskers, smarter than ever before? Are those who lose their jobs due to technological change deserving of our sympathy or our scorn (“adapt or die,” as the saying goes)? Is that utopia on the horizon or dystopia around the bend?

These questions are important, but the way they are framed tends to make technology too central, granting agency to tools while sidestepping the thorny issue of the larger social structures in which we and our technologies are embedded. The current obsession with the neurological repercussions of technology—what the Internet is doing to our brains, our supposedly shrinking attention spans, whether video games improve coordination and reflexes, how constant communication may be addictive, whether Google is making us stupid—is a prime example. This focus ignores the business imperatives that accelerate media consumption and the market forces that encourage compulsive online engagement.

Yet there is one point on which the cheerleaders and the naysayers agree: we are living at a time of profound rupture—something utterly unprecedented and incomparable. All connections to the past have been rent asunder by the power of the network, the proliferation of smartphones, tablets, and Google Glass, the rise of big data, and the dawning of digital abundance. Social media and memes will remake reality—for better or for worse. My view, on the other hand, is that there is as much continuity as change in our new world, for good and for ill.

Many of the problems that plagued our media system before the Internet was widely adopted have carried over into the digital domain—consolidation, centralization, and commercialism—and will continue to shape it. Networked technologies do not resolve the contradictions between art and commerce, but rather make commercialism less visible and more pervasive. The Internet does not close the distance between hits and flops, stars and the rest of us, but rather magnifies the gap, eroding the middle space between the very popular and virtually unknown. And there is no guarantee that the lucky few who find success in the winner-take-all economy online are more diverse, authentic, or compelling than those who succeeded under the old system.

Despite the exciting opportunities the Internet offers, we are witnessing not a leveling of the cultural playing field, but a rearrangement, with new winners and losers. In the place of Hollywood moguls, for example, we now have Silicon Valley tycoons (or, more precisely, we have Hollywood moguls and Silicon Valley tycoons). The pressure to be quick, to appeal to the broadest possible public, to be sensational, to seek easy celebrity, to be attractive to corporate sponsors—these forces multiply online where every click can be measured, every piece of data mined, every view marketed against. Originality and depth eat away at profits online, where faster fortunes are made by aggregating work done by others, attracting eyeballs and ad revenue as a result.

Indeed, the advertising industry is flourishing as never before. In a world where creative work holds diminishing value, where culture is “free,” and where fields like journalism are in crisis, advertising dollars provide the unacknowledged lifeblood of the digital economy. Moreover, the constant upgrading of devices, operating systems, and Web sites; the move toward “walled gardens” and cloud computing; the creep of algorithms and automation into every corner of our lives; the trend toward filtering and personalization; the lack of diversity; the privacy violations: all these developments are driven largely by commercial incentives. Corporate power and the quest for profit are as fundamental to new media as old. From a certain angle, the emerging order looks suspiciously like the old one.

In fact, the phrase “new media” is something of a misnomer because it implies that the old media are on their way out, as though at the final stage of some natural, evolutionary process. Contrary to all the talk of dinosaurs, this is more a period of adaptation than extinction. Instead of distinct old and new media, what we have is a complex cultural ecosystem that spans the analog and digital, encompassing physical places and online spaces, material objects and digital copies, fleshy bodies and virtual identities.

In that ecosystem, the online and off-line are not discrete realms, contrary to a perspective that has suffused writing about the Internet since the word “cyberspace” was in vogue.


You might be reading this book off a page or screen—a screen that is part of a gadget made of plastic and metal and silicon, the existence of which throws a wrench into any fantasy of a purely ethereal exchange. All bits eventually butt up against atoms; even information must be carried along by something, by stuff.

I am not trying to deny the transformative nature of the Internet, but rather to recognize that we’ve lived with it long enough to ask tough questions.


Thankfully, this is already beginning to happen. Over the course of writing this book, the public conversation about the Internet and the technology industry has shifted significantly.


There have been revelations about the existence of a sprawling international surveillance infrastructure, uncompetitive business and exploitative labor practices, and shady political lobbying initiatives, all of which have made major technology firms the subjects of increasing scrutiny from academics, commentators, activists, and even government officials in the United States and abroad.




People are beginning to recognize that Silicon Valley platitudes about “changing the world” and maxims like “don’t be evil” are not enough to ensure that some of the biggest corporations on Earth will behave well. The risk, however, is that we will respond to troubling disclosures and other disappointments with cynicism and resignation when what we need is clearheaded and rigorous inquiry into the obstacles that have stalled some of the positive changes the Internet was supposed to usher in.

First and foremost, we need to rethink how power operates in a post-broadcast era. It was easy, under the old-media model, to point the finger at television executives and newspaper editors (and even book publishers) and the way they shaped the cultural and social landscape from on high. In a networked age, things are far more ambiguous, yet new-media thinking, with its radical sheen and easy talk of revolution, ignores these nuances. The state is painted largely as a source of problematic authority, while private enterprise is given a free pass; democracy, fuzzily defined, is attained through “sharing,” “collaboration,” “innovation,” and “disruption.”

In fact, wealth and power are shifting to those who control the platforms on which all of us create, consume, and connect. The companies that provide these and related services are quickly becoming the Disneys of the digital world—monoliths hungry for quarterly profits, answerable to their shareholders, not to us, their users, and more influential, more ubiquitous, and more insinuated into the fabric of our everyday lives than Mickey Mouse ever was. As such they pose a whole new set of challenges to the health of our culture.

Right now we have very little to guide us as we attempt to think through these predicaments. We are at a loss, in part, because we have wholly adopted the language and vision offered up by Silicon Valley executives and the new-media boosters who promote their interests. They foresee a marketplace of ideas powered by profit-driven companies who will provide us with platforms to creatively express ourselves and on which the most deserving and popular will succeed.

They speak about openness, transparency, and participation, and these terms now define our highest ideals, our conception of what is good and desirable, for the future of media in a networked age. But these ideals are not sufficient if we want to build a more democratic and durable digital culture. Openness, in particular, is not necessarily progressive. While the Internet creates space for many voices, the openness of the Web reflects and even amplifies real-world inequities as often as it ameliorates them.

I’ve tried hard to avoid the Manichean view of technology, which assumes either that the Internet will save us or that it is leading us astray, that it is making us stupid or making us smart, that things are black or white. The truth is subtler: technology alone cannot deliver the cultural transformation we have been waiting for; instead, we need to first understand and then address the underlying social and economic forces that shape it. Only then can we make good on the unprecedented opportunity the Internet offers and begin to make the ideal of a more inclusive and equitable culture a reality. If we want the Internet to truly be a people’s platform, we will have to work to make it so.




1

A PEASANT’S KINGDOM


I moved to New York City in 1999 just in time to see the dot-com dream come crashing down. I saw high-profile start-ups empty out their spacious lofts, the once ebullient spaces vacant and echoing; there were pink-slip parties where content providers, designers, and managers gathered for one last night of revelry. Although I barely felt the aftershocks that rippled through the economy when the bubble burst, plenty of others were left thoroughly shaken. In San Francisco the boom’s rising rents pushed out the poor and working class, as well as those who had chosen voluntary poverty by devoting themselves to social service or creative experimentation. Almost overnight, the tech companies disappeared, the office space and luxury condos vacated, jilting the city and its inhabitants despite the irreversible accommodations that had been made on behalf of the start-ups. Some estimate that 450,000 jobs were lost in the Bay Area alone.




As the economist Doug Henwood has pointed out, a kind of amnesia blots out the dot-com era, blurring it like a bad hangover. It seems so long ago: before tragedy struck lower Manhattan, before the wars in Afghanistan and Iraq started, before George W. Bush and then Barack Obama took office, before the economy collapsed a second time. When the rare backward glance is cast, the period is usually dismissed as an anomaly, an embarrassing by-product of irrational exuberance and excess, an aberrational event that gets chalked up to collective folly (the crazy business schemes, the utopian bombast, the stock market fever), but “never as something emerging from the innards of American economic machinery,” to use Henwood’s phrase.




At the time of the boom, however, the prevailing myth was that the machinery had been forever changed. “Technological innovation,” Alan Greenspan marveled, had instigated a new phase of productivity and growth that was “not just a cyclical phenomenon or a statistical aberration, but … a more deep-seated, still developing, shift in our economic landscape.” Everyone would be getting richer, forever. (Income polarization was actually increasing at the time, the already affluent becoming ever more so while wages for most U.S. workers stagnated at levels below 1970s standards.)


The wonders of computing meant skyrocketing productivity, plentiful jobs, and the end of recessions. The combination of the Internet and IPOs (initial public offerings) had flattened hierarchies, computer programming jobs were reconceived as hip, and information was officially more important than matter (bits, boosters liked to say, had triumphed over atoms). A new economy was upon us.

Despite the hype, the new economy was never that novel. With some exceptions, the Internet companies that fueled the late nineties fervor were mostly about taking material from the off-line world and simply posting it online or buying and selling rather ordinary goods, like pet food or diapers, and prompting Internet users to behave like conventional customers. Due to changes in law and growing public enthusiasm for high-risk investing, the amount of money available to venture capital funds ballooned from $12 billion in 1996 to $106 billion in 2000, leading many doomed ideas to be propped up by speculative backing. Massive sums were committed to enterprises that replicated efforts: multiple sites specialized in selling toys or beauty supplies or home improvement products, and most of them flopped. Barring notable anomalies like Amazon and eBay, online shopping failed to meet inflated expectations. The Web was declared a wasteland and investments dried up, but not before many venture capitalists and executives profited handsomely, soaking up underwriting fees from IPOs or exercising their options before stocks went under.


Although the new economy evaporated, the experience set the stage for a second bubble and cemented a relationship between technology and the market that shapes our digital lives to this day.

As business and technology writer Sarah Lacy explains in her breathless account of Silicon Valley’s recent rebirth, Once You’re Lucky, Twice You’re Good, a few discerning entrepreneurs extracted a lesson from the bust that they applied to new endeavors with aplomb after the turn of the millennium: the heart of the Internet experience was not e-commerce but e-mail, that is to say, connecting and communicating with other people as opposed to consuming goods that could easily be bought at a store down the street. Out of that insight rose the new wave of social media companies that would be christened Web 2.0.

The story Lacy tells is a familiar one to those who paid attention back in the day: ambition and acquisitions, entrepreneurs and IPOs. “Winning Is Everything” is the title of one chapter; “Fuck the Sweater-Vests” another. You’d think it was the nineties all over again, except that this time around the protagonists aspired to market valuations in the billions, not millions. Lacy admires the entrepreneurs all the more for their hubris; they are phoenixes, visionaries who emerged unscathed from the inferno, who walked on burning coals to get ahead. After the bust, the dot-coms and venture capitalists were “easy targets,” blamed for being “silly, greedy, wasteful, irrelevant,” Lacy writes. The “jokes and quips” from the “cynics” cut deep, making it that much harder for wannabe Web barons “to build themselves back up again.” But build themselves back up a handful of them did, heading to the one place insulated against the downturn, Silicon Valley. “The Valley was still awash in cash and smart people,” says Lacy. “Everyone was just scared to use them.”

Web 2.0 was the logical consequence of the Internet going mainstream, weaving itself into everyday life and presenting new opportunities as millions of people rushed online. The “human need to connect” is “a far more powerful use of the Web than for something like buying a book online,” Lacy writes, recounting the evolution of companies like Facebook, LinkedIn, Twitter, and the now beleaguered Digg. “That’s why these sites are frequently described as addictive … everyone is addicted to validations and human connections.”

Instead of the old start-up model, which tried to sell us things, the new one trades on our sociability—our likes and desires, our observations and curiosities, our relationships and networks—which is mined, analyzed, and monetized. To put it another way, Web 2.0 is not about users buying products; rather, users are the product. We are what companies like Google and Facebook sell to advertisers. Of course, social media have made a new kind of engagement possible, but they have also generated a handful of enormous companies that profit off the creations and interactions of others. What is social networking if not the commercialization of the once unprofitable art of conversation? That, in a nutshell, is Web 2.0: content is no longer king, as the digital sages like to say; connections are.

Though no longer the popular buzzword it once was, “Web 2.0” remains relevant, its key tenets incorporated not just by social networking sites but by virtually all cultural production and distribution, from journalism to film and music. As traditional institutions go under—consider the independent book, record, and video stores that have gone out of business—they are being replaced by a small number of online giants—Amazon, iTunes, Netflix, and so on—that are better positioned to survey and track users. These behemoths “harness collective intelligence,” as the process has been described, to sell people goods and services directly or indirectly. “The key to media in the twenty-first century may be who has the most knowledge of audience behavior, not who produces the most popular content,” Tom Rosenstiel, the director of the Pew Research Center’s Project for Excellence in Journalism, explained.

Understanding what sites people visit, what content they view, what products they buy and even their geographic coordinates will allow advertisers to better target individual consumers. And more of that knowledge will reside with technology companies than with content producers. Google, for instance, will know much more about each user than will the proprietor of any one news site. It can track users’ online behavior through its Droid software on mobile phones, its Google Chrome Web browser, its search engine and its new tablet software. The ability to target users is why Apple wants to control the audience data that goes through the iPad. And the company that may come to know the most about you is Facebook, with which users freely share what they like, where they go and who their friends are.




For those who desire to create art and culture—or “content,” to use that horrible, flattening word—the shift is significant. More and more of the money circulating online is being soaked up by technology companies, with only a trickle making its way to creators or the institutions that directly support them. In 2010 publishers of articles and videos received around twenty cents of each dollar advertisers spent on their sites, down from almost a whole dollar in 2003.


Cultural products are increasingly valuable only insofar as they serve as a kind of “signal generator” from which data can be mined. The real profits flow not to the people who fill the platforms where audiences congregate and communicate—the content creators—but to those who own them.

The original dot-com bubble’s promise was first and foremost about money. Champions of the new economy conceded that the digital tide would inevitably lift some boats higher than others, but they commonly assumed that everyone would get a boost from the virtual effervescence. A lucky minority would work at a company that was acquired or went public and spend the rest of their days relaxing on the beach, but the prevailing image had each individual getting in on the action, even if it was just by trading stocks online.

After the bubble popped, the dream of a collective Internet-enabled payday faded. The new crop of Internet titans never bothered to issue such empty promises to the masses. The secret of Web 2.0 economics, as Lacy emphasizes, is getting people to create content without demanding compensation, whether by contributing code, testing services, or sharing everything from personal photos to restaurant reviews. “A great Web 2.0 site needs a mob of people who use it, love it, and live by it—and convince their friends and family to do the same,” Lacy writes. “Mobs will devote more time to a site they love than to their jobs. They’ll frequently build the site for the founders for free.” These sites exist only because of unpaid labor, the millions of minions toiling to fill the coffers of a fortunate few.

Spelling this out, Lacy is not accusatory but admiring—awestruck, even. When she writes that “social networking, media, and user-generated content sites tap into—and exploit—core human emotions,” it’s with fealty appropriate to a fiefdom. As such, her book inadvertently provides a perfect exposé of the hypocrisy lurking behind so much social media rhetoric. The story she tells, after all, is about nothing so much as fortune seeking, yet the question of compensating those who contribute to popular Web sites, when it arises, is quickly brushed aside. The “mobs” receive something “far greater than money,” Lacy writes, offering up the now-standard rationalization for the inequity: entertainment, self-expression, and validation.


This time around, no one’s claiming the market will be democratized—instead, the promise is that culture will be. We will “create” and “connect” and the entrepreneurs will keep the cash.

This arrangement has been called “digital sharecropping.”


Instead of the production or distribution of culture being concentrated in the hands of the few, it is the economic value of culture that is hoarded. A small group, positioned to capture the value of the network, benefits disproportionately from a collective effort. The owners of social networking sites may be forbidden from selling songs, photos, or reviews posted by individual users, for example, but the companies themselves, including user content, might be turned over for a hefty sum: hundreds of millions for Bebo and Myspace and Goodreads, one billion or more for Instagram and Tumblr. The mammoth archive of videos displayed on YouTube and bought by Google was less a priceless treasure to be preserved than a vehicle for ads. These platforms succeed because of an almost unfathomable economy of scale; each search brings revenue from targeted advertising and fodder for the data miners: each mouse click is a trickle in the flood.

Over the last few years, there has been an intermittent but spirited debate about the ethics of this economic relationship. When Flickr was sold to Yahoo!, popular bloggers asked whether the site should compensate those who provided the most viewed photographs; when the Huffington Post was acquired by AOL for $315 million, many of the thousands of people who had been blogging for free were aghast, and some even started a boycott; when Facebook announced its upcoming IPO, journalists speculated about what the company, ethically, owed its users, the source of its enormous valuation.


The same holds for a multitude of sites: Twitter wouldn’t be worth billions if people didn’t tweet, Yelp would be useless without freely provided reviews, Snapchat nothing without chatters. The people who spend their time sharing videos with friends, rating products, or writing assessments of their recent excursion to the coffee shop—are they the users or the used?

The Internet, it has been noted, is a strange amalgamation of playground and factory, a place where amusement and labor overlap in confusing ways. We may enjoy using social media, while also experiencing them as obligatory; more and more jobs require employees to cultivate an online presence, and social networking sites are often the first place an employer turns when considering a potential hire. Some academics call this phenomenon “playbor,” an awkward coinage that tries to get at the strange way “sexual desire, boredom, friendship” become “fodder for speculative profit” online, to quote media scholar Trebor Scholz.


Others use the term “social factory” to describe Web 2.0, envisioning it as a machine that subsumes our leisure, transforming lazy clicks into cash. “Participation is the oil of the digital economy,” as Scholz is fond of saying. The more we comment and share, the more we rate and like, the more economic value is accumulated by those who control the platforms on which our interactions take place.




Taking this argument one step further, a frustrated minority have complained that we are living in a world of “digital feudalism,” where sites like Facebook and Tumblr offer up land for content providers to work while platform owners expropriate value with impunity and, if you read the fine print, stake unprecedented claim over users’ creations.


“By turn, we are the heroic commoners feeding revolutions in the Middle East and, at the same time, ‘modern serfs’ working on Mark Zuckerberg’s and other digital plantations,” Marina Gorbis of the Institute for the Future has written. “We, the armies of digital peasants, scramble for subsistence in digital manor economies, lucky to receive scraps of ad dollars here and there, but mostly getting by, sometimes happily, on social rewards—fun, social connections, online reputations. But when the commons are sold or traded on Wall Street, the vast disparities between us, the peasants, and them, the lords, become more obvious and more objectionable.”




Computer scientist turned techno-skeptic Jaron Lanier has staked out the most extreme position in relation to those he calls the “lords of the computing clouds,” arguing that the only way to counteract this feudal structure is to institute a system of nano-payments, a market mechanism by which individuals are rewarded for every bit of private information gleaned by the network (an interesting thought experiment, though Lanier’s proposed solution may well lead to worse outcomes than the situation we have now, due to the twisted incentives it entails).

New-media cheerleaders take a different view.


Consider the poet laureate of digital capitalism, Kevin Kelly, cofounder of Wired magazine and longtime technology commentator. Where critics see feudalism and exploitation, Kelly, in a widely circulated essay, sees the emergence of a new cooperative ethos, a resurgence of collectivism—though not the kind your grandfather worried about. “The frantic global rush to connect everyone to everyone, all the time, is quietly giving rise to a revised version of socialism,” Kelly raves, pointing to sites like Wikipedia, YouTube, and Yelp.

Instead of gathering on collective farms, we gather in collective worlds. Instead of state factories, we have desktop factories connected to virtual co-ops. Instead of sharing drill bits, picks, and shovels, we share apps, scripts, and APIs. Instead of faceless politburos, we have faceless meritocracies, where the only thing that matters is getting things done. Instead of national production, we have peer production. Instead of government rations and subsidies, we have a bounty of free goods.

Kelly reassures his readers that the people who run this emerging economy are not left-wing in any traditional sense. They are “more likely to be libertarians than commie pinkos,” he explains. “Thus, digital socialism can be viewed as a third way that renders irrelevant the old debates,” transcending the conflict between “free-market individualism and centralized authority.” Behold, then, the majesty of digital communitarianism: it’s socialism without the state, without the working class, and, best of all, without having to share the wealth.

The sensational language is easy to mock, but this basic outlook is widespread among new-media enthusiasts. Attend any technology conference or read any book about social media or Web 2.0, whether by academics or business gurus, and the same conflation of communal spirit and capitalist spunk will be impressed upon you. The historian Fred Turner traces this phenomenon back to 1968, when a small band of California outsiders founded the Whole Earth Catalog and then, in 1985, the Whole Earth ’Lectronic Link, or the WELL, the prototype of online communities, and later Wired.

This group performed the remarkable feat of transforming computers from enablers of stodgy government administration to countercultural cutting edge, from implements of technocratic experts to machines that empower everyday people. They “reconfigured the status of information and information technologies,” Turner explains, by contending that these new tools would tear down bureaucracy, enhance individual consciousness, and help build a new collaborative society.


These prophets of the networked age—led by the WELL’s Stewart Brand and including Kelly and many other still-influential figures—moved effortlessly from the hacker fringe to the upper echelon of the Global Business Network, all while retaining their radical patina.

Thus, in 1984 Macintosh could run an ad picturing Karl Marx with the tagline, “It was about time a capitalist started a revolution”—and so it continues today. The online sphere inspires incessant talk of gift economies and public-spiritedness and democracy, but commercialism and privatization and inequality lurk beneath the surface.

This contradiction is captured in a single word: “open,” a concept capacious enough to contain both the communal and capitalistic impulses central to Web 2.0 while being thankfully free of any socialist connotations. New-media thinkers have claimed openness as the appropriate utopian ideal for our time, and the concept has caught on. The term is now applied to everything from education to culture to politics and government. Broadly speaking, in tech circles, open systems—like the Internet itself—are always good, while closed systems—like the classic broadcast model—are bad. Open is Google and Wi-Fi, decentralization and entrepreneurialism, the United States and Wikipedia. Closed equals Hollywood and cable television, central planning and entrenched industry, China and the Encyclopaedia Britannica. However imprecisely the terms are applied, the dichotomy of open versus closed (sometimes presented as freedom versus control) provides the conceptual framework that increasingly underpins much of the current thinking about technology, media, and culture.

The fetish for openness can be traced back to the foundational myths of the Internet as a wild, uncontrollable realm. In 1996 John Perry Barlow, the former Grateful Dead lyricist and cattle rancher turned techno-utopian firebrand, released an influential manifesto, “A Declaration of the Independence of Cyberspace,” from Davos, Switzerland, during the World Economic Forum, the annual meeting of the world’s business elite. (“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone … You have no sovereignty where we gather.”) Almost twenty years later, these sentiments were echoed by Google’s Eric Schmidt and the State Department’s Jared Cohen, who partnered to write The New Digital Age: “The Internet is the largest experiment involving anarchy in history,” they insist. It is “the world’s largest ungoverned space,” one “not truly bound by terrestrial laws.”

While openness has many virtues, it is also undeniably ambiguous. Is open a means or an end? What is open and to whom? Mark Zuckerberg said he designed Facebook because he wanted to make the world more “open and connected,” but his company does everything it can to keep users within its confines and exclusively retains the data they emit. Yet this vagueness is hardly a surprise given the history of the term, which was originally imported from software production: the designation “open source” was invented to rebrand free software as business friendly, foregrounding efficiency and economic benefits (open as in open markets) over ethical concerns (the freedom of free software).


In keeping with this transformation, openness is often invoked in a way that evades discussions of ownership and equity, highlighting individual agency over commercial might and ignoring underlying power imbalances.

In the 2012 “open issue” of Google’s online magazine Think Quarterly, phrases like “open access to information” and “open for business” appear side by side, purposely blurring participation and profit seeking. One article on the way “smart brands” are adapting to the digital world insists that as a consequence of the open Web, “consumers have more power than ever,” while also outlining the ways “the web gives marketers a 24/7 focus group of the world,” unleashing a flood of “indispensable” data that inform “strategic planning and project development.” Both groups are supposedly “empowered” by new technology, but the former merely get to comment on products while the latter boost their bottom line.

By insisting that openness is the key to success, whether you are a multinational corporation or a lone individual, today’s digital gurus gloss over the difference between humans and businesses, ignoring the latter’s structural advantages: true, “open” markets in some ways serve consumers’ buying interests, but the more open people’s lives are, the more easily they can be tracked and exploited by private interests.


But as the technology writer Rob Horning has observed, “The connections between people are not uniformly reciprocal.” Some are positioned to make profitable use of what they glean from the network; others are more likely to be taken advantage of, giving up valuable information and reaping few benefits. “Networks,” Horning writes, “allow for co-optation as much as cooperation.”




Under the rubric of open versus closed, the paramount concern is access and whether people can utilize a resource or platform without seeking permission first. This is how Google and Wikipedia wind up in the same camp, even though one is a multibillion-dollar advertising-funded business and the other is supported by a nonprofit foundation. Both are considered “open” because they are accessible, even though they operate in very different ways. Given that we share noncommercial projects on commercial platforms all the time online, the distinction between commercial and noncommercial has been muddled; meanwhile “private” and “public” no longer refer to types of ownership but ways of being, a setting on a social media stream. This suits new-media partisans, who insist that the “old debates” between the market and the state, capital and government, are officially behind us. “If communism vs. capitalism was the struggle of the twentieth century,” law professor and open culture activist Lawrence Lessig writes, “then control vs. freedom will be the debate of the twenty-first century.”




No doubt, there is much to be said for open systems, as many have shown elsewhere.


The heart of the Internet is arguably the end-to-end principle (the idea that the network should be kept as flexible, unrestricted, and open to a variety of potential uses as possible). From this principle to the freely shared technical protocols and code that Tim Berners-Lee used to create the World Wide Web, we have open standards to thank for the astonishing growth of the online public sphere and the fact that anyone can participate without seeking permission first.




Open standards, in general, foster a kind of productive chaos, encouraging innovation and invention, experimentation and engagement. But openness alone does not provide the blueprint for a more equitable social order, in part because the “freedom” promoted by the tech community almost always turns out to be of the Darwinian variety. Openness in this context is ultimately about promoting competition, not protecting equality in any traditional sense; it has little to say about entrenched systems of economic privilege, labor rights, fairness, or income redistribution. Despite enthusiastic commentators and their hosannas to democratization, inequality is not exclusive to closed systems. Networks reflect and exacerbate imbalances of power as much as they reduce them.

The tendency of open systems to amplify inequality—and new-media thinkers’ glib disregard for this fundamental characteristic—was on vivid display during a talk at a 2012 installment of the TEDGlobal conference convened under the heading “Radical Openness.” Don Tapscott, self-proclaimed “thought leader” and author of influential books including Growing Up Digital and Wikinomics, titled his presentation “Four Principles for the Open World”: collaboration, transparency, sharing, and empowerment.

Tapscott told the story of his neighbor Rob McEwen, a banker turned gold mine owner, the former chairman and CEO of Goldcorp Inc. When staff geologists couldn’t determine where the mineral deposits at one of his mines were located, McEwen turned to the Web, uploading data about the company’s property and offering a cash reward to anyone who helped them hit pay dirt. “He gets submissions from all around the world,” Tapscott explained. “They use techniques that he’s never heard of, and for his half a million dollars in prize money, Rob McEwen finds 3.4 billion dollars worth of gold. The market value of his company goes from 90 million to 10 billion dollars, and I can tell you, because he’s my neighbor, he’s a happy camper.”

This is Tapscott’s idea of openness in action: a banker-turned-CEO goes from rich to richer (of course, there was no mention of the workers in the mine and the wages they were paid for their effort, nor an acknowledgment of Goldcorp’s record of human rights violations).


For Tapscott, McEwen’s payoff is a sign of a bold new era, an “age of promise fulfilled and of peril unrequited,” to use his grandiloquent phrase. “And imagine, just consider this idea, if you would,” he concluded. “What if we could connect ourselves in this world through a vast network of air and glass? Could we go beyond just sharing information and knowledge? Could we start to share our intelligence?” The possibility of sharing any of the windfall generated as a consequence of this collective wisdom went unmentioned.

A similar willful obliviousness to the problems of open systems undercuts the claims of new-media thinkers that openness has buried the “old debates.” While Lawrence Lessig convincingly makes the case that bloated intellectual property laws—the controlling nature of copyright—often stifle creative innovation from below, his enthusiasm for the free circulation of information blinds him to the increasing commodification of our expressive lives and the economic disparity built into the system he passionately upholds.

“You can tell a great deal about the character of a person by asking him to pick the great companies of an era,” Lessig declares in Remix: Making Art and Commerce Thrive in the Hybrid Economy, and whether they root for the “successful dinosaurs” or the “hungry upstarts.” Technology, he continues, has “radically shifted” the balance of power in favor of the latter. Proof? “The dropouts of the late 1990s (mainly from Stanford) beat the dropouts of the middle 1970s (from Harvard). Google and Yahoo! were nothing when Microsoft was said to dominate.” This, it seems, is what it means to have moved beyond the dichotomy of market and state into the realm of openness—that we must cheerlead the newly powerful from the sidelines for no better reason than that they are new.

Even if the players weren’t from Stanford and Harvard (two institutions where Lessig has held prominent appointments), the statement would still be unsettling. Who could possibly construe a contest between the dropouts of these elite and storied institutions as one between underdogs and an oppressor? And why should we cheer Amazon over local bookstores, Apple over independent record labels, or Netflix over art house cinemas, on the basis of their founding date or their means of delivery? The dinosaurs and upstarts have more in common than Lessig cares to admit.

As Woodrow Wilson famously said, “That a peasant may become king does not render the kingdom democratic.” Although new-media celebrants claim to crusade on behalf of the “yeoman creator,” they treat the kings of the digital domain with unwavering reverence, the beneficence of their rule evident in the freedom that their platforms and services allow. Praising the development of what he calls “hybrid economies,” where sharing and selling coexist, Lessig argues that advances in advertising will provide adequate support for the creation and dissemination of culture in a digital age. “As if by an invisible hand,” the ways we access culture will dramatically change as the dinosaurs “fall to a better way of making money” via hyper-targeted marketing.

Lessig is deeply concerned about control of culture and appalled that a generation has been criminalized for downloading copyrighted content, yet he ignores the problem of commercialism and is sanguine about the prospect of these same youth being treated as products, their personal data available for a price.


Though the reviled traditional broadcast model evolved the way it did to serve the interests of advertisers, Internet enthusiasts brush away history’s warnings, confident that this time will be different.

Going against the grain of traditional media critics, Lessig and others believe that the problem is not commercialism of culture but control. The long-standing progressive critique of mass media identified the market as the primary obstacle to true cultural democracy. When General Electric acquired NBC, for example, the CEO assured shareholders that the news, a commodity just like “toasters, lightbulbs, or jet engines,” would be expected to make the same profit margin as any other division. But art and culture, the critical line of thought maintains, should be exempt, or at least shielded, from the revenue-maximizing mandates of Wall Street, lest vital forms of creativity shrivel up or become distorted by the stipulations of merchandising—an outlook that leads to advocating for regulations to break up conglomerates or for greater investment in public media.

Internet enthusiasts, in contrast, tend to take a laissez-faire approach: technology, unregulated and unencumbered, will allow everyone to compete in a truly open digital marketplace, resulting in a richer culture and more egalitarian society. Entertainment companies become the enemy only when they try to dictate how their products are consumed instead of letting people engage with them freely, recontextualizing and remixing popular artifacts, modifying and amending and feeding them back into the semiotic stream.

When all is said and done, the notion of a hybrid economy turns out to be nothing more than an upbeat version of digital sharecropping, a scenario in which all of us have the right to remix sounds and images and spread them through networks that profit from our every move. The vision of cultural democracy upheld by new-media thinkers has us all marinating in commercial culture, downloading it without fear of reprisal, repurposing fragments and uploading the results to pseudo-public spaces—the privately owned platforms that use our contributions for their own ends or sell our attention and information to advertisers. Under this kind of open system, everything we do gets swept back into a massive, interactive mash-up in the cloud, each bit parsed in the data mine, invisible value extracted by those who own the backend.

In a way, this is the epitome of what communications scholar Henry Jenkins calls “convergence culture”—the melding of old and new media that the telecom giants have long been looking forward to, for it portends a future where all activity flows through their pipes. But it also represents a broader blurring of boundaries: communal spirit and capitalist spunk, play and work, production and consumption, making and marketing, editorializing and advertising, participation and publicity, the commons and commerce. The “old rhetoric of opposition and co-optation” has been rendered obsolete, Jenkins assures us.


But if there is no opposition—no distinction between noncommercial and commercial, public and private, independent and mainstream—it is because co-optation has been absolute.

Though she now tours under her own name, the Portland-based musician Rebecca Gates long fronted the Spinanes, a band that, in the nineties and early aughts, released three albums on the influential Sub Pop label. She had, in many ways, the classic indie rock experience, playing clubs around the country, sleeping on couches, getting aired on college radio and MTV’s 120 Minutes. Sub Pop provided advances for the band to make records and tour support, and though the albums never sold enough copies to recoup, the label made it possible for Gates to devote herself to her craft. Then, after a hiatus of ten years, Gates finished a new record and went back on the road, but this time she self-released her music, taking advantage of the low cost of digital distribution. Gates was cautiously optimistic that she could end up better off than under the old model—that the enterprise may be more sustainable and satisfying—even if she sold fewer copies in the end.

Gates thought a lot about the new opportunities offered by technology as part of a project undertaken in partnership with the Future of Music Coalition, a nonprofit that advocates for the rights of independent artists, lobbying for everything from health care to community radio. She led an ambitious survey of working musicians to see how they had actually fared as the recording industry transformed. “It’s really easy to get hung up on success stories,” Gates told me, referencing appealing anecdotes about creators who “made it” by leaving their record labels and going viral online or by giving their music away and relying on touring income or T-shirt sales. Gates discovered it was hard to generalize about people’s experiences. “I’ve seen hard data for people who are in successful bands, quote unquote, festival headlining bands, who would make more money in a good retail job,” she said.

“There’s this myth that’s not quite a myth that you don’t need intermediaries anymore,” Gates continued. But it is harder than it seems for artists like Gates to bypass the giants and go solo, directing traffic to their own Web sites, though that’s what many artists would prefer to do. “Let’s imagine your record is done, that somehow you paid for production and you’re in the clear—then immediately you’re in a situation where you are dealing with iTunes, which takes thirty percent, and if you are small and you go through a brokerage, which you sometimes have to do, you can lose fifty percent.” Artists who do work with labels, big or small, often end up getting less from each digital sale.

A similar arrangement applies to streaming services such as Pandora and Spotify, which have come under fire from a range of working musicians for their paltry payouts. The four major labels have an equity stake in Spotify and receive a higher royalty rate than the one paid to independent artists and labels (one independent songwriter calculated that it would take him 47,680 plays on Spotify to earn the profit of the sale of one LP).

“As far as I can tell, there’s been this replication of the old model,” Gates said. “There’s a large segment of the tech platforms that are simply a replacement for any sort of old label structures except that now they don’t give advances.”
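The break-even arithmetic behind a claim like that is easy to sketch. Taking the 47,680-plays figure at face value and assuming, purely for illustration, that an LP sale nets an artist about five dollars, the implied payout works out to

\[ \text{royalty per play} \;\approx\; \frac{\$5}{47{,}680} \;\approx\; \$0.0001, \qquad \text{plays needed to match } n \text{ LP sales} \;\approx\; 47{,}680 \times n. \]

At rates of that order, a track needs hundreds of thousands of streams before the income rivals even modest physical or download sales.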

During this crucial moment of cultural and economic restructuring, artists themselves have been curiously absent from a conversation dominated by executives, academics, and entrepreneurs. Conference after conference is held to discuss the intersection of music and new media, Gates notes, but working musicians are rarely onstage talking about their experiences or presenting their ideas, even as their work is used to lure audiences and establish lucrative ventures, not unlike the way books and CDs have long been sold as loss leaders at big chains to attract shoppers. The cultural field has become increasingly controlled by companies “whose sole contribution to the creative work,” to borrow Cory Doctorow’s biting expression, “is chaining children to factories in China and manufacturing skinny electronics” or developing the most sophisticated methods for selling our data to advertisers.

It wasn’t supposed to be this way. One natural consequence of Web-based technologies was supposed to be the elimination of middlemen, or “disintermediation.” “The great virtue of the Internet is that it erodes power,” the influential technologist Esther Dyson said. “It sucks power out of the center, and takes it to the periphery, it erodes the power of institutions over people while giving to individuals the power to run their lives.”


The problem, though, is that disintermediation has not lived up to its potential. Instead, the Web has facilitated the rise of a new generation of mediators that are sometimes difficult to see. As much as networked technology has dismantled and distributed power in more egalitarian ways, it has also extended and obscured power, making it less visible and, arguably, harder to resist.

The disruptive impact of the Web has been uneven at best. From one angle, power has been sucked to the periphery: new technologies have created space for geographically dispersed communities to coalesce, catalyzed new forms of activism and political engagement, and opened up previously unimaginable avenues for self-expression and exposure to art and ideas. That’s the story told again and again. But if we look from another angle and ask how, precisely, the power of institutions has been eroded, the picture becomes murkier.

Entrenched institutions have been strengthened in many ways. Thanks to digital technologies, Wall Street firms can trade derivatives at ever-faster rates, companies can inspect the private lives of prospective and current employees, insurance agencies have devised new methods to assess risky clients, political candidates can marshal big data to sway voters, and governments can surveil the activities of citizens as never before. Corporate control—in media as in other spheres—is as secure as ever. In profound ways, power has been sucked in, not out.

In the realm of media and culture, the uncomfortable truth is that the information age has been accompanied by increasing consolidation and centralization, a process aided by the embrace of openness as a guiding ideal. While the old-media colossi may not appear to loom as large over our digital lives as they once did, they have hardly disappeared. Over the past decade, legacy media companies have not fallen from the Fortune 500 firmament but have actually risen. In early 2013 their skyrocketing share prices surprised analysts: Disney and Time Warner were up 32 percent, CBS 40.2 percent, and Comcast a shocking 57.6 percent.

These traditional gatekeepers have been joined by new online gateways, means of accessing information that cannot be avoided. A handful of Internet and technology companies have become as enormous and influential as the old leviathans: they now make up thirteen of the thirty largest publicly traded corporations in the United States.


The omnipresent Google, which, on an average day, accounts for approximately 25 percent of all North American consumer Internet traffic, has gobbled up over one hundred smaller firms, partly as a method of thwarting potential rivals, averaging about one acquisition a week since 2010; Facebook now has well over one billion users, or more than one in seven people on the planet; Amazon controls one-tenth of all American online commerce and its swiftly expanding cloud computing services host the data and traffic of hundreds of thousands of companies located in almost two hundred countries, an estimated one-third of all Internet users accessing Amazon’s cloud at least once a day; and Apple, which sits on almost $140 billion in cash reserves, jockeys with Exxon Mobil for the title of the most valuable company on earth, with a valuation exceeding the GDP (gross domestic product) of most nations.

Instead of leveling the field between small and large, the open Internet has dramatically tilted it in favor of the most massive players. Thus an independent musician like Rebecca Gates is squeezed from both sides. Off-line, local radio stations have been absorbed by Clear Channel and the major labels control more of the music market than they did before the Internet emerged. And online, Gates has to position herself and her work on the monopolists’ platforms or risk total invisibility.

Monopolies, contrary to early expectations, prosper online, where winner-take-all markets emerge partly as a consequence of Metcalfe’s law, which says that the value of a network grows in proportion to the square of its number of connections or users: the more people who have telephones or social media profiles or use a given search engine, the more valuable those services become. (Counterintuitively, given his outspoken libertarian views, PayPal founder and first Facebook investor Peter Thiel has declared competition overrated and praised monopolies for improving margins.) What’s more, many of the emerging info-monopolies now dabble in hardware, software, and content, building their businesses at every possible level, vertically integrating as in the analog era.
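A minimal way to make the scaling concrete (an idealized model rather than an exact empirical law): a network of n members contains n(n − 1)/2 possible pairwise connections, so its value is taken to grow as

\[ V(n) \;\propto\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}, \]

meaning that doubling the user base roughly quadruples the network’s value, whereas a one-way broadcast medium gains value only linearly with its audience. That asymmetry is part of why the biggest platforms keep pulling further ahead.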

This is the contradiction at the center of the new information system: the more customized and user friendly our computers and mobile devices are, the more connected we are to an extensive and opaque circuit of machines that coordinate and keep tabs on our activities; everything is accessible and individualized, but only through companies that control the network from the bottom up.


Amazon strives to control both the bookshelf and the book and everything in between. It makes devices, offers cloud computing services, and has begun to produce its own content, starting various publishing imprints before expanding to feature film production.


Google is taking a similar approach, having expanded from search into content, operating system design, retail, gadget manufacturing, robotics, “smart” appliances, self-driving cars, debit cards, and fiber broadband.

More troubling, at least for those who believed the Internet upstarts would inevitably vanquish the establishment dinosaurs, are the ways the new and old players have melded. Condé Nast bought Reddit, Fox has a stake in Vice Media, Time Warner bet on Maker Studios (which is behind some of YouTube’s biggest stars), Apple works intimately with Hollywood and AT&T, Facebook joined forces with Microsoft and the major-label-backed Spotify, and Twitter is trumpeting its utility to television programmers. Google, in addition to cozying up to the phone companies that use its Android operating system, has struck partnership deals with entertainment companies including Disney, Paramount, ABC, 20th Century Fox, and Sony Pictures while making numerous overtures to network and cable executives in hopes of negotiating a paid online television service.

Google has licensing agreements with the big record companies for its music-streaming service and holds a stake alongside Sony and Universal in Vevo, the music video site that is also the most viewed “channel” on YouTube.


YouTube has attempted to partly remake itself in television’s image, investing a small fortune in professionally produced Web series, opening studios for creators in New York, Los Angeles, and London, and seeking “brand safe” and celebrity-driven content to attract more advertising revenue.


“Top YouTube execs like to say they’re creating the next generation of cable TV, built and scaled for the web,” reports Ad Age. “But instead of 500-odd channels on TV, YouTube is making a play for the ‘next 10,000,’ appealing to all sorts of niches and interest groups.”

Though audiences may be smaller as a consequence of this fragmentation, they will be more engaged and more thoroughly monitored and marketed to than traditional television viewers.


As Lessig predicted, the “limitations of twentieth-century advertising” are indeed being overcome. As a consequence, the future being fashioned perpetuates and expands upon the defects of the earlier system instead of forging a new path.

Meanwhile, the captains of industry leading the charge toward mergers and acquisitions within the media sphere cynically invoke the Internet to justify their grand designs. Who can complain, they shrug, if one fellow owns a multibillion-dollar empire when anyone can start a Web site for next to nothing? With the company the subject of antitrust investigations in Europe and the United States, Google executives respond to allegations that it abuses its dominance in search to give its own services an advantage by insisting that on the Internet “competition is one click away.”

Such is Rupert Murdoch’s view of things as well. Not long before the phone-hacking scandal brought down his tabloid News of the World, Murdoch made a bid for BSkyB, a move that would have given him control of over half of the television market in the UK. He assured the British House of Lords that concerns about ownership and consolidation were “ten years out of date” given the abundance of news outlets for people to choose from online. The House of Lords, however, was not convinced, as a lengthy report to Parliament made clear: “We do not accept that the increase of news sources invalidates the case for special treatment of the media through ownership regulation. We believe that there is still a danger that if media ownership becomes too concentrated the diversity of voices available could be diminished.”

In the United States, however, even the core attribute of the Internet’s openness, so disingenuously deployed by the likes of Murdoch, is under threat. The nation’s leading cable lobbying group has a phalanx of full-time staff campaigning against Net neutrality—the idea that government regulation should ensure that the Internet stay an open platform, one where service providers cannot slow down or block certain Web sites to stifle competition or charge others a fee to speed up their traffic.

Ironically, the effort is headed by ex-FCC (Federal Communications Commission) chairman Michael Powell, who, in 2003, began his abdication of his role as public servant by publishing an op-ed in which he argued against government intervention in the media marketplace. “The bottomless well of information called the Internet” makes ownership rules simply unnecessary, a throwback to “the bygone era of black-and-white television,” Powell wrote, positively invoking the very attributes of the Internet he is now paid handsomely to undermine. (In 2013 the revolving door came full circle when Tom Wheeler became Chairman of the FCC; Wheeler once stood at the helm of the same lobbying organization Powell now presides over.)

Drawing on the principle of common carriage—rules first established under English common law, applied initially to things like canals, highways, and railroads and later to telegraph and telephone lines—advocates of Net neutrality seek to extend this tradition to our twenty-first-century communications system, prohibiting the owners of a network from abusing their power by discriminating against anyone’s data, whether by slowing or stopping it or charging more to speed it up. They hope to defend the openness of the Internet by securing federal regulation that would guarantee that all bits, no matter who is sending or receiving them, are treated equally. The images and text on your personal Web site, they maintain, should be delivered as swiftly as Amazon’s or CNN’s front page.

Telecom companies have something different in mind. AT&T, Verizon, Time Warner, Comcast, and others recognize that they could boost revenue significantly by charging for preferential service—adding a “fast lane” to the “information superhighway,” as critics have described their plan. Service providers, for example, could ban the services of rivals outright, decide to privilege content they own while throttling everything else, or start charging content providers to have their Web sites load faster, prioritizing those who pay the most—all three scenarios putting newcomers and independents at a substantial and potentially devastating disadvantage while favoring the already consolidated and well capitalized.

The Internet is best thought of as a series of layers: a physical layer, a code layer, and a content layer. The bottom “physical,” or ISP (Internet service provider) layer, is made up of the cables and routers through which our communications travel. In the middle is the “code” or “applications,” which consists of the protocols and software that make the lower layer run. On top of that is the “content,” the information we move across wires and airwaves and see on our screens. The telecommunications companies, which operate the physical layer, are fundamental to the entire enterprise. Common carriers—“mediating institutions” essential to social functioning—are sometimes called “public callings,” a term that underscores the responsibility that comes with such position and power.

In his insightful book The Master Switch, Tim Wu, originator of the term “Net neutrality,” explains why this may be the biggest media and communications policy battle ever waged. “While there were once distinct channels of telephony, television, radio, and film,” Wu writes, “all information forms are now destined to make their way increasingly along the master network that can support virtually any kind of data traffic.” Convergence has raised the stakes. “With every sort of political, social, cultural, and economic transaction having to one degree or another now gone digital, this proposes an awesome dependence on a single network, and no less vital need to preserve its openness from imperial designs,” Wu warns. “This time is different: with everything on one network, the potential power to control is so much greater.”

While we like to imagine the Internet as a radical, uncontrollable force—it’s often said the system was designed to survive a nuclear attack—it is in fact vulnerable to capture by the private interests we depend on for access. In 2010, rulings by the FCC based on a controversial proposal put forth by Verizon and Google established network neutrality on wired broadband but failed to extend the common carrier principle to wireless connections; in other words, network neutrality rules apply to the cable or DSL service you use at home but not to your cell phone. In 2013, Google showed further signs of weakening its resolve on the issue when it began to offer fiber broadband with advantageous terms of service that many observers said violated the spirit of Net neutrality.

Given the steady shift to mobile computing, including smartphones, tablets, and the emerging Internet-of-things (the fact that more and more objects, from buildings to cars to clothing, will be networked in coming years), the FCC’s 2010 ruling was already alarmingly insufficient when it was made. Nevertheless, telecommunications companies went on offense, with Verizon successfully challenging the FCC’s authority to regulate Internet access in federal appeals court in early 2014. But even as the rules were struck down, the judges acknowledged concerns that broadband providers represent a real threat, describing the kind of discriminatory behavior they were declaring lawful: companies might restrict “end-user subscribers’ ability to access the New York Times website” in order to “spike traffic” to their own news sources or “degrade the quality of the connection to a search website like Bing if a competitor like Google paid for prioritized access.”

Proponents of Net neutrality maintain that the FCC rules were in any case riddled with loopholes and that the goal now is to ground open Internet rules and the FCC’s authority on firmer legal footing (namely by reclassifying broadband as a “telecommunications” and not an “information” service under Title II of the Communications Act, thereby automatically subjecting ISPs to common carrier obligations). Opponents contend that Net neutrality would unduly burden telecom companies, which should have the right to dictate what travels through their pipes and charge accordingly, while paving the way for government control of the Internet. As a consequence of the high stakes, Net neutrality—a fight for the Internet as an open platform—has become a cause célèbre, and rightly so. However arcane the discussion may sometimes appear, the outcome of this battle will profoundly affect us all, and it is one worth fighting for.

Yet openness at the physical layer is not enough. While an open network ensures the equal treatment of all data—something undoubtedly essential for a democratic networked society—it does not sweep away all the problems of the old-media model, failing to adequately address the commercialization and consolidation of the digital sphere. We need to find other principles that can guide us, principles that better equip us to comprehend and confront the market’s role in shaping our media system, principles that help us rise to the unique challenge of bolstering cultural democracy in a digital era. Openness cannot protect us from, and can even perpetuate, the perils of a peasant’s kingdom.

2

FOR LOVE OR MONEY


Not that many years ago, Laura Poitras was living in Yemen, alone, waiting. She had rented a house close to the home of Abu Jandal, Osama bin Laden’s former bodyguard and the man she hoped would be the subject of her next documentary. He put her off when she asked to film him, remaining frustratingly elusive. Next week, he’d tell her, next week, hoping the persistent American would just go away.

“I was going through hell,” Poitras said, sitting in her office a few months after the premiere of her movie The Oath, the second in her trilogy of documentaries about foreign policy and national security after September 11. “I just didn’t know if it was going to be two years, ten years, you know?” She waited, sure there was a story to be told and that it was extraordinary, but not sure if she’d be allowed to tell it. As those agonizing months dragged on, she did her best to be productive and pursued other leads. During Ramadan Poitras was invited to the house of a man just released from Guantánamo, whom she hoped to interview. “People almost had a heart attack that I was there,” Poitras recounts. “I didn’t film. I was shut down, and I was sat with the women. They were like, ‘Aren’t you afraid that they’re going to cut your head off?’”

Bit by bit Abu Jandal opened up. Poitras would go home with only three or four hours of footage, but what she caught on tape was good enough to keep her coming back, a dozen times in all. “I think it probably wasn’t until a year into it that I felt that I was going to get a film,” Poitras said. A year of waiting, patience, uprootedness, and uncertainty before she knew that her work would come to anything.

With the support of PBS and a variety of grants, The Oath took almost three years to make, including a solid year in the editing room. The film’s title speaks of two pledges: one made by Jandal and others in al-Qaeda’s inner circle promising loyalty to bin Laden and another made by an FBI agent named Ali Soufan, who interrogated Abu Jandal when he was captured by U.S. forces. “Soufan was able to extract information without using violence,” Poitras has said, and he testified to Congress against violent interrogation tactics. “One of his reasons is because he took an oath to the Constitution. In a broad sense, the film is about whether these men betrayed their loyalties to their oaths.”

“I always think, whenever I finish a film, that I would never have done that if I had known what it would cost emotionally, personally.” The emotional repercussions of disturbing encounters can be felt long after the danger has passed; romantic relationships are severed by distance; the future is perpetually uncertain. Poitras, however, wasn’t complaining. She experiences her work as a gift, a difficult process but a deeply satisfying one, and was already busy planning her next project, about the erosion of civil liberties in the wake of the war on terror.

In January 2013 she was contacted by an anonymous source who turned out to be Edward Snowden, the whistle-blower preparing to make public a trove of documents revealing the National Security Agency’s massive secret digital surveillance program. He had sought Poitras out, certain that she was someone who would understand the scope of the revelations and the need to proceed cautiously. Soon she was on a plane to Hong Kong to shoot an interview that would shake the world, and in the middle of another film that would take her places she never could have predicted at the outset.

No simple formula explains the relationship between creative effort and output, nor does the quantity of time invested in a project correlate in any clear way to quality—quality being, of course, a slippery and subjective measure in itself. We can appreciate obvious skill, such as the labor of musicians who have devoted decades to becoming masters of their form, but it’s harder to assess work that is more subjective, more oblique, or less polished.

Complex creative labor—the dedicated application of human effort to some expressive end—continues despite technological innovation, stubbornly withstanding the demand for immediate production in an economy preoccupied with speed and cost cutting. We should hardly be surprised: aesthetic and communicative impulses are, by their very nature, indifferent to such priorities. A vase isn’t any more useful for being elaborately glazed. Likewise, a film is not necessarily any more informative for being demanding to produce. We can’t reduce the contents of a novel to a summary of the plot, nor whittle down philosophical insight to a sound bite without something profound being lost along the way.

Cultural work, which is enhanced by the unpredictability of the human touch and the irregular rhythms of the imagination and intelligence, defies conventional measures of efficiency. Other trades were long ago deprived of this breathing room, the singular skill of the craftsperson automated away by the assembly line, much as the modern movement in architecture, to take one of many possible examples, has cut back on hand-finished flourishes in favor of standardized parts and designs.

For better or worse, machines continue to encroach on once protected territory. Consider the innovations aimed at optimizing intrinsically creative processes—software engineered to translate texts, monitor the emotional tone of e-mails, perform research, recommend movies and books, “to make everything that’s implicit in a writer’s skill set explicit to a machine,” as an executive of one start-up describes its effort.


Algorithms designed to analyze and intensify the catchiness of songs are being used to help craft and identify potential Top 40 hits. These inventions, when coupled with steadily eroding economic support for arts and culture, underscore the fact that no human activity is immune to the relentless pressure to enlist technology to the cause of efficiency and increased productivity.

The problem isn’t with technology or efficiency, per se. Efficiency can be a remarkable thing, as in nature where nothing is wasted, including waste itself, which nurtures soil and plant and animal life. But the kind of efficiency to which techno-evangelists aspire emphasizes standardization, simplification, and speed, not diversity, complexity, and interdependence. And efficiency often masquerades as a technically neutral concept when it is in fact politically charged.

Instead of connoting the best use of scarce resources to attain a valued end, efficiency has become a code word promoting markets and competition over the public sphere, and profitability above all.


Music, author and engineer Christopher Steiner predicts in Automate This, will become more homogenized as executives increasingly employ bots to hunt for irresistible hooks. “Algorithms may bring us new artists, but because they build their judgment on what was popular in the past, we will likely end up with some of the same kind of forgettable pop we already have.”

There’s no denying the benefits the arts have reaped from technological innovation. Writing is a technology par excellence, one that initially aroused deep distrust and suspicion. Likewise, the book is a tool so finely honed to suit human need that we mistake it for something eternal and immutable.


Every musical instrument—from the acoustic guitar to the timpani to synthesizers—is a contrived contraption. Without advances in chemistry and optics we would have no photography; without turntables, no hip-hop. I owe my career as a documentarian to the advent of digital video. New inventions make unimaginable art possible. No doubt, with emerging technologies, we stand on the brink of expressive forms still inconceivable.

Nonetheless, the arts do not benefit from technological advancement in the way other industries do: a half century ago it took pretty much the same amount of time and labor to compose a novel, produce a play, or conduct an orchestra as it takes today. Even with the aid of a computer and access to digital archives, the task of researching and constructing, say, a historical narrative remains obstinately demanding. For filmmakers the costs of travel, payments to crew, and money to support time in the field and the editing room persist despite myriad helpful innovations. Technology may enable new expressive forms and distribution may be cheaper than in the past, but the process of making things remains, in many fundamental respects, unchanged. The arts, to use the language of cultural economics, depend on a type of labor input that cannot be replaced by new technologies and capital.

In the mid-sixties, two Princeton economists, William Baumol and William Bowen, made the groundbreaking argument that economic growth actually creates a “cost disease” where labor-intensive creative productions are concerned, the relative cost of the arts increasing in comparison to other manufactured goods. Baumol and Bowen’s analysis focused specifically on live performance, but their basic insight is applicable to any practice that demands human ingenuity and effort that cannot be made more efficient or eliminated through technological innovation. (Explaining Baumol and Bowen’s dilemma in the New Yorker, James Surowiecki notes that there are, in effect, two economies in existence, one that is becoming more productive while the other isn’t. In the first camp, we have the economy of computer manufacturing, carmakers, and Walmart bargains; in the second, the economy of undergraduate colleges, hair salons, auto repair, and the arts. “Cost disease isn’t anyone’s fault … It’s just endemic to businesses that are labor-intensive,” Surowiecki explains.)

To put it in the jargon proper to the economic analysis, the arts suffer from a “productivity lag,” where productivity is defined as physical output per work hour. Baumol and Bowen’s famous example is a string quartet: today it takes the same number of people the same amount of time to perform a composition by Mozart as it did in the 1800s, a fact that yields an exasperating flat line next to the skyward surge of something like computer manufacturing, which has seen productivity increases of 60 percent per year. “The tendency for costs to rise and for prices to lag behind is neither a matter of bad luck nor mismanagement,” Baumol and Bowen explain in their seminal study. “Rather, it is an inescapable result of the technology of live performance, which will continue to contribute to the widening of the income gaps of the performing organizations.”
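A minimal sketch of the mechanism, under the textbook simplifying assumptions that wages start at some level w0 and grow everywhere at the economy-wide productivity rate g, while output per hour in the performing arts stays fixed at q: the labor cost of staging a given performance at time t is then roughly

\[ c(t) \;=\; \frac{w_{0}\, e^{g t}}{q}, \]

which rises at the economy-wide rate g even though nothing about the performance itself has changed, while goods whose productivity also grows at g can hold their unit costs flat. The lag, in other words, is relative rather than a sign of waste.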

Analyzing the predicament faced by the labor-intensive arts, they proposed two cures for the cost disease. The first remedy was social subsidy, and in fact their work played an important role in energizing the push for increased funding for cultural institutions in the United States. The second cure was tied to a more general economic prediction, one infused with the optimism of the era. It may be the unfortunate fate of the arts to stagnate in terms of productivity growth, Baumol and Bowen maintained, but increased productivity in other sectors would help buoy creators. In their view, rising wages and—more important—an increase in free time would give the American people ample opportunities to create and enjoy art.

In a digital age, however, art and culture face a core contradiction, since copies can be made with the push of a button. Like the live performances Baumol and Bowen discuss, most creative endeavors have high fixed costs. While the hundredth or thousandth or millionth digital copy of Poitras’s first documentary, My Country, My Country, about a Sunni family trying to survive in war-torn Iraq, costs virtually nothing, the first copy cost her nearly four hundred thousand dollars.

When copies can be made and distributed across the globe in an instant, the logic of supply and demand pushes the price down to nothing. Yet when human imagination and exertion are essential to the creative process, the cost of cultural production only rises. It’s a paradox that cannot be wished away. Baumol and Bowen identified “an ever-increasing gap” between the operating costs of labor-intensive creative products and their earned income. In a digital economy, this gap becomes a yawning cavern.
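The arithmetic of that bind is simple to lay out. If a work costs F to produce and essentially nothing to copy, selling n copies at price p recovers the investment only when n·p ≥ F; taking the roughly $400,000 figure above for F and a purely illustrative price of five dollars per copy,

\[ n \;\ge\; \frac{F}{p} \;=\; \frac{\$400{,}000}{\$5} \;=\; 80{,}000 \text{ copies}, \]

and as frictionless copying pushes p toward zero, the required n grows without bound; no volume of near-free copies closes the gap Baumol and Bowen identified.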

To new-media utopians, monetary concerns are irrelevant. In recent years a bevy of popular technologists, scholars, and commentators have united to paint an appealing picture of a future where the cultural field, from entertainment to academia, is remade as a result of digital technologies that allow individuals to create and collaborate at no cost. Before the Internet, the story goes, people needed to be part of a massive bureaucracy and have a big budget to do something like make a movie. Now anyone with a mobile phone can shoot a video and upload it to a global distribution platform. Before the Internet, a small number of specialists were hired to compose an encyclopedia. Now volunteers scattered across the globe can create one more comprehensive than any the world has ever known. And so on.

An amateur paradise is upon us, a place where people are able to participate in cultural production for the pleasure of it, without asking permission first. Social media have enabled a new paradigm of collaboration. The old closed, hierarchical, institutional model is being replaced by a decentralized, networked system open to all. Barriers to entry have been removed, gatekeepers have been demolished, and the costs of creating and distributing culture have plummeted. New tools not only have made cultural production more efficient but have equalized opportunity.

NYU professor Clay Shirky, perhaps the leading proponent of this view, calls this process “social production.” Harvard’s Yochai Benkler uses the term “peer production,” business writer Jeff Howe calls it “crowdsourcing,” and Don Tapscott and his coauthor Anthony D. Williams say “wikinomics.” Whatever term they use, the commentators agree that a revolution is unfolding, with the potential to transform not just culture but also politics and the economy. They put social production on a pedestal, holding it up as more egalitarian, ethical, and efficient than the old model it is said to supersede.

Tapping the deep vein of American populism, new-media thinkers portray the amateur ethos flourishing online as a blow against the elitism and exclusivity of the professions, their claims to expertise and authority, and the organizations they depend on, and there’s something appealing about this view.


The professional class is not blameless by any means: it has erected often arbitrary barriers in the form of credentialing and licensing and has often failed to advance the public good while securing its own position.

The professions, as many others have observed, have served as a kind of “class fortress,” excluding talented, motivated people in service of monopolistic self-preservation. (“Institutions will try to preserve the problem to which they are the solution” is known in tech circles as the Shirky principle.) It is this aspect of professionalism that outrages Internet apostles, who celebrate the liberation from professionals who claim special knowledge and cheer the fact that authority is shifting from “faraway offices to the network of people we know, like, and respect.”

More far-reaching, mass amateurization is said to reveal something profound about human nature. Social media, enthusiasts contend, prove that long-dominant assumptions were wrong. The abundance of user-generated content, no matter how silly or derivative, reveals an intrinsic creative drive. While most of us probably didn’t need the Internet to show us that human beings share an irrepressible urge to create and share—an “art instinct”—for some this truism is a revelation.

It follows, by this logic, that if people are intrinsically motivated to produce culture, and technology enables them to act on this motivation effortlessly and affordably and without financial reward, then amateurs are less compromised than compensated professionals and thus superior. “Amateurs,” Shirky writes, “are sometimes separated from professionals by skill, but always by motivation; the term itself derives from the Latin amare—‘to love.’ The essence of amateurism is intrinsic motivation: to be an amateur is to do something for the love of it.”

Making a similar case, Yochai Benkler likens cultural creation to blood drives: the quality of donations increases when organizers stop paying.


“Remember, money isn’t always the best motivator,” Benkler said, reiterating the point during a TED Talk touching on similar themes. “If you leave a fifty-dollar check after dinner with friends, you don’t increase the probability of being invited back. And if dinner isn’t entirely obvious, think of sex.”

So it won’t matter if some people’s operating costs end up exceeding their earned income. A well-received academic monograph about the impact of online file sharing on music production, published under the auspices of Harvard Business School, echoes these insights, allaying any suspicion one might have that lack of income could inhibit the world’s creative output. The authors argue that a decline in “industry profitability” won’t hurt production because artists’ unique motivations will keep them churning out music even if they are operating at a loss. “The remuneration of artistic talent differs from other types of labor in at least two important respects. On the one hand, artists often enjoy what they do, suggesting they might continue being creative even when the monetary incentives to do so become weaker. In addition, artists receive a significant portion of their remuneration not in monetary form.” To quote the professors, “many of them enjoy fame, admiration, social status, and free beer in bars.”

Another paper, published with the romantic title “Money Ruins Everything,” comes to a similar conclusion. Its authors, a team of social scientists, were stunned by what they found online: throngs of people who, instead of engaging in cost-benefit analysis, “produce content for the love of it, for the joy of expressing themselves, because it is fun, to demonstrate that they are better at it than others, or for a host of other non-commercial motivations.” The very existence of creators who “produce content for the love of it and are prepared to work for free—or even to lose money to feed their desire to create” upends traditional models of media production. If you want insight into the culture of the future, they say, just look at Wikipedia, the open source software community, and popular photo-sharing services. There are millions of people who contribute user-generated content without promise of remuneration or reward.

This distinction between love and money seems self-evident and uncomplicated. If the choice is between a powerful record mogul and a teenager uploading a video of himself singing in his bedroom, or the inanity of a high-grossing nightly cable news host versus some insightful commentary on a personal Web site, who wouldn’t side with the little person? But the distinction is deceptive. What sounds like idealism, upon further reflection, reveals itself to be the opposite. For one thing, it is deeply cynical to deny professionals any emotional investment in their work. Can we really argue that creative professionals—filmmakers, writers, architects, graphic designers, and so on—do not care deeply about what they do? And what about doctors, teachers, and scientists?

The corollary of Benkler’s and Shirky’s argument is that only those who despise their work deserve to be paid for their efforts.


It’s worth pointing out that these men—despite their enthusiasm for social production—release their books with conventional publishers and hold positions at elite academic institutions. Surely they do not believe their work as professional writers, researchers, and teachers is suspect because they were compensated. There is a note of truth in the idea that adversity fuels creativity, but when reduced to an economic truism—a decline in industry profitability won’t hurt artistic production because artists will work for beer—the notion rings not just hollow but obscene.

These tidily opposed categories of professional and amateur are ones into which few actually existing creative people perfectly fit. And the consequences of the digital upheaval are far more equivocal than the Shirkys and Benklers acknowledge. While the economics of the Web might apply to remixing memes or posting in online forums, the costs and risks associated with creative acts that require leaving one’s computer have hardly collapsed.

Where will this new paradigm leave projects like The Oath? Following Shirky’s logic, Laura Poitras is one of those professionals who should be overthrown by noble amateurs, her labor-intensive filmmaking process a throwback to another era, before creativity was a connected, collective process. The Internet might be a wonderful thing, but you can’t crowdsource a relationship with a terrorist or a whistle-blower.

Makers of art and culture have long straddled two economies, the economy of the gift and the economy of the market, as Lewis Hyde elegantly demonstrated in his book The Gift: Creativity and the Artist in the Modern World. Unlike other resources, Hyde explained, culture is passed from person to person, between whom it forms “feeling-bonds,” an initiation or preservation of affection. A simple purchase, on the other hand, forges no necessary connection, as any interaction at a cash register makes clear. Thus culture is a gift, a kind of glue, a covenant, but one that, unlike barter, obliges nothing in return. In other words, the fruits of creative effort exist to be shared. Yet the challenge is how to support this kind of work in a market-based society. “Invariably the money question comes up,” writes Hyde. “Labors such as mine are notoriously non-remunerative, and your landlord is not interested in your book of translations the day your rent comes due.”

The fate of creative people is to exist in two incommensurable realms of value and be torn between them—on one side, the purely economic activity associated with the straightforward selling of goods or labor; on the other, the fundamentally different, elevated form of value we associate with art and culture. It is this dilemma that led Baudelaire to ruefully proclaim that the “prostitution of the poet” was “an unavoidable necessity.”

Yet the challenge of maintaining oneself in a world of money is hardly a problem unique to the creatively inclined. This dilemma may not trouble those who choose to pursue wealth above all else, but most people seek work that feeds both the spirit and the belly. Likewise, the cultural realm is not the only sphere in which some essential part cannot be bought or sold. Teaching, therapy, medicine, science, architecture, design, even politics and law when practiced to serve the public good—certainly the gift operates within these fields as well. The gift can even be detected in supposedly menial jobs where people, in good faith, do far more than meager wages require of them. Creative people are not the only ones who struggle desperately to balance the contradictory demands of the gift and the market. But culture is the domain where this quandary is often most visible and acknowledged. Culture is one stage on which we play out our anxieties about the impact of market values on our inner lives. As we transition to a digital age, this anxiety is in full view.

The supposed conflict between amateurs and professionals sparked by the Internet speaks to a deep and long-standing confusion about the relationship between work and creativity in our society. Artists, we imagine, are grasshoppers, singing while ants slog away—or butterflies: delicate and flighty creatures who, stranded in a beehive, have the audacity to demand honey. No matter how exacting or extensive the effort a project requires, if the process allows for some measure of self-realization, it’s not unpleasant or self-sacrificing enough to fit our conception of work as drudgery. We tend to believe that the labor of those who appear to love what they do does not by definition qualify as labor.

We have succumbed, as the essayist Rebecca Solnit put it to me, to the “conventionalized notion of work as the forty hours of submission to another’s purpose snipped out of your life (and leaving a hole in your heart and mind).” Along the way we ignore the fact that many people, not only members of the vaunted “professional” class, love their jobs. “A lot of builders and firemen really enjoy themselves. Bakers and cooks can be pretty happy, and so can some farmers and fishermen.” Nor should we romanticize creative labor, she noted: “Most artists don’t love all parts of their work—I hate all the administration, the travel, the bad posture, the excess solitude, and the uncertainty about my own caliber and my future.”

In the 1951 classic White Collar, sociologist C. Wright Mills presented a powerful alternative to the stark dichotomies of amateurs versus professionals. Examining the emerging category of office worker, Mills advocated, instead, for what he called the Renaissance view of work, a process that would allow for not only the creation of objects but the development of the self—an act both mental and manual that “confesses and reveals” us to the world. The problem, as Mills saw it, was that development of the self was trivialized into “hobbies”—it was being amateurized, in other words—and so relegated to the lesser realm of leisure as opposed to the realm of legitimate labor.

“Each day men sell little pieces of themselves in order to try to buy them back each night and week end with the coin of fun,” wrote Mills, despairing of a cycle that splits us in two: an at-work self and an at-play self, the person who produces for money and the person who produces for love.


New-media thinkers believe social production and amateurism transcend the old problem of alienated labor by allowing us to work for love, not money, but in fact the unremunerated future they anticipate will only deepen a split that many desperately desire to reconcile.

Innovation and invention were expected to bring about humankind’s inevitable release from alienated labor. The economist John Maynard Keynes once predicted that the four-hour workday was close at hand and that technical improvements in manufacturing would allow ample time for people to focus on “the art of life itself.” Into the 1960s experts agonized over the possibility of a “crisis of leisure time,” which they prophesied would sweep the country—a crisis precipitated not by a want of time off but by an excess of it.

In 1967, testimony before a Senate subcommittee indicated that “by 1985 people could be working just 22 hours a week or 27 weeks a year or could retire at 38.” Over the ensuing decades countless people have predicted that machines would facilitate the “end of work” by automating drudgery and freeing humans to perform labor they enjoy (“Let the robots take the jobs, and let them help us dream up new work that matters,” concludes one Wired cover story rehashing this old idea).

New-media thinkers do not pretend this future has come to pass, but in Cognitive Surplus Clay Shirky presents what can be read as a contemporary variation on this old theme, explaining how the cumulative free time of the world’s educated population—an estimated trillion hours a year—is being funneled into creative, collaborative projects online.


Time is something Shirky claims we have a growing abundance of thanks to two factors: steadily increasing prosperity and a decline of television viewing. The Web, he argues, challenges us to stop thinking of time as “individual minutes to be whiled away” and imagine it, instead, as a “social asset that can be harnessed.”

Projects like Wikipedia, message boards, and the latest viral memes are creative paradigms for a new age: entertaining, inclusive, easy to make, and efficient—the accumulation of tidbits of attention from thousands of people around the world. Much of the art and culture of the future, he wagers, will be produced in a similar manner, by pooling together spare moments spent online. Our efforts shall be aggregated, all the virtual crumbs combining to make a cake. Institutions will be supplanted as a consequence of the deployment of this surplus.

Shirky’s contributions reveal not how far we’ve progressed in pursuit of “the art of life” but how much ground has been lost since Keynes, how our sense of what’s possible has been circumscribed despite the development of new, networked wonders. Today’s popular visionary imagines us hunched over our computers with a few idle minutes to spare, our collective clicks supposed to substitute for what was once the promise of personal creative development—the freedom to think, feel, create, and act with the whole of one’s being.

In addition to other problematic aspects of his argument, Shirky’s two foundational assertions—that television watching is down and that free time has increased over recent decades—are both unfounded. Despite competition from the Internet, television viewing has generally risen over recent years, with the average American taking in nearly five hours of video each day, 98 percent through a traditional TV set. “Americans,” a 2012 Nielsen report states, “are not turning off.”

According to economists, with the exception of those who suffer from under- and unemployment, work hours have actually risen. Those lucky enough to be fully employed are, in fact, suffering from “time impoverishment.” Today the average citizen works longer hours for less money than he or she once did, putting in an extra four and a half weeks a year compared to 1979. Married couples with children are on the job an extra 413 hours, or an extra ten weeks a year, combined.


Rubbing salt in the wound, the United States is the only industrialized nation where employers are not required by law to provide workers any paid vacation time.

The reason the prophecies of Mills and Keynes never came to pass is obvious but too often overlooked: new technologies do not emerge in a vacuum free of social, political, and economic influences. Context is all-important. On their own, labor-saving machines, however ingenious, are not enough to bring about a society of abundance and leisure, as the Luddites who destroyed the power looms set to replace them over two centuries ago knew all too well. If we want to see the fruits of technological innovation widely shared, it will require conscious effort and political struggle. Ultimately, outcomes are shaped as much by the capabilities of new technologies as by the wider circumstances in which they operate.

Baumol and Bowen, for example, made their rosy predictions against the backdrop of a social consensus now in tatters. When they wrote their report in the sixties, the prevailing economic orthodoxy said that both prosperity and risk should be broadly spread. Health care, housing, and higher education were more accessible to more people than they had ever been. Bolstered by a strong labor movement, unemployment was low and wages high by today’s standards. There was talk of shortened workweeks and guaranteed annual income for all. As a consequence of these conditions, men and women felt emboldened to demand more than just a stable, well-compensated job; they wanted work that was also engaging and gratifying.

In the fifties and sixties, this wish manifested in multiple ways, aiming at the status quo from within and without. First came books like The Organization Man and The Lonely Crowd, which voiced widespread anxieties about the erosion of individuality, inwardness, and agency within the modern workplace. Company men revolted against the “rat race.” Conformity was inveighed against, mindless acquiescence condemned, and affluence denounced as an anesthetic to authentic experience. Those who stood poised to inherit a gray flannel suit chafed against its constraints. By 1972 blue-collar workers were fed up, too, with wildcat strikers at auto factories protesting the monotony of the assembly line. The advances of technology did not, in the end, liberate the worker from drudgery but rather further empowered those who owned the machines. By the end of the 1970s, as former labor secretary Robert Reich explains,

a wave of new technologies (air cargo, container ships and terminals, satellite communications and, later, the Internet) had radically reduced the costs of outsourcing jobs abroad. Other new technologies (automated machinery, computers, and ever more sophisticated software applications) took over many other jobs (remember bank tellers? telephone operators? service station attendants?). By the ’80s, any job requiring that the same steps be performed repeatedly was disappearing—going over there or into software.

At the same time the ideal of a “postindustrial society” offered the alluring promise of work in a world in which goods were less important than services. Over time, phrases like “information economy,” “immaterial labor,” “knowledge workers,” and “creative class” slipped into everyday speech. Mental labor would replace the menial; stifling corporate conventions would give way to diversity and free expression; flexible employment would allow workers to shape their own lives.

These prognostications, too, were not to be. Instead, the increase of shareholder influence in the corporate sector accelerated the demand for ever-higher returns on investment and shorter turnaround. Dismissing stability as the refusal to innovate (or rather, to cut costs), business leaders cast aspersions on the steadying tenets of the first half of the twentieth century, including social provisions and job security. Instead of lifetime employment, the new system valorized adaptability, mobility, and risk; in the place of full-time employment, there were temporary contracts and freelance instability. In this context, the wish for expressive, worthwhile work, the desire to combine employment and purpose, took on a perverse form.

New-media thinkers, with their appetite for disintermediation and creative destruction, implicitly endorse and advance this transformation. The crumbling and hollowing out of established cultural institutions, from record labels to universities, and the liberation of individuals from their grip is a fantasy that animates discussions of amateurism. New technologies are hailed for enabling us to “organize without organizations,” which are condemned as rigid and suffocating and antithetical to the open architecture of the Internet.

However, past experience shows that the receding of institutions does not necessarily make space for a more authentic, egalitarian existence: if work and life have been made more flexible, people have also become unmoored, blown about by the winds of the market; if old hierarchies and divisions have been overthrown, the price has been greater economic inequality and instability; if the new system emphasizes potential and novelty, past achievement and experience have been discounted; if life has become less predictable and predetermined, it has also become more precarious as liability has shifted from business and government to the individual. It turns out that what we need is not to eliminate institutions but to reinvent them, to make them more democratic, accountable, inclusive, and just.

More than anyone else, urbanist Richard Florida, author of The Rise of the Creative Class, has built his career as a flag-bearer for the idea that individual ingenuity can fill the void left by declining institutions. Like new-media thinkers, with whom he shares a boundless admiration for all things high tech and Silicon Valley, he shuns “organizational or institutional directives” while embracing the values of meritocracy and openness. In Florida’s optimistic view, the demise of career stability has unbridled creativity and eliminated alienation in the workplace. “To some degree, Karl Marx had it partly right when he foresaw that the workers would someday control the means of production,” Florida declares. “This is now beginning to happen, although not as Marx thought it would, with the proletariat rising to take over factories. Rather, more workers than ever control the means of production, because it is inside their heads; they are the means of production.”




Welcome to what Florida calls the “information-and-idea-based economy,” a place where “people have come to accept that they’re on their own—that the traditional sources of security and entitlement no longer exist, or even matter.” Where earlier visionaries prophesied a world in which increased leisure allowed all human beings the well-being and security to freely cultivate their creative instincts, the apostles of the creative class collapse labor into leisure and exploitation into self-expression, and they arrogate creativity to serve corporate ends.

“Capitalism has also expanded its reach to capture the talents of heretofore excluded groups of eccentrics and nonconformists,” Florida writes. “In doing so, it has pulled off yet another astonishing mutation: taking people who would once have been bizarre mavericks operating at the bohemian fringe and setting them at the very heart of the process of innovation and economic growth.” According to Florida’s theory, the more creative types colorfully dot an urban landscape, the greater a city’s “Bohemian Index” and the higher the likelihood of the city’s economic success.

It’s all part of what he calls the “Big Morph”—“the resolution of the centuries-old tension between two value systems: the Protestant work ethic and the Bohemian ethic” into a new “creative ethos.” The Protestant ethic treats work as a duty; the Bohemian ethic, he says, is hedonistic. Profit seeking and pleasure seeking have united, the industrialist and the bon vivant have become one. “Highbrow and lowbrow, alternative and mainstream, work and play, CEO and hipster are all morphing together today,” Florida enthuses.




What kind of labor is it, exactly, that people will perform in this inspired Shangri-la? Florida’s popular essays point the way: he applauds a “teenage sales rep re-conceiving a Vonage display” as a stunning example of creative ingenuity harnessed for economic success; later he announces, anecdotally, that an “overwhelming” number of students would prefer to work “lower-paying temporary jobs in a hair salon” than “good, high-paying jobs in a machine tool factory.” Cosmetology is “more psychologically rewarding, creative work,” he explains.




It’s tempting to dismiss such a broad definition of creativity as out of touch, but Florida’s declarations illuminate an important trend, one that helped set the terms for the ascension of amateurism. It is not that creative work has suddenly become abundant, as Florida would have us believe; we have not all become Mozarts on the floor of some big-box store, Frida Kahlos at the hair salon. Rather, the point is that the psychology of creativity has become increasingly useful to the economy. The disposition of the artist is ever more in demand. The ethos of the autonomous creator has been repurposed to serve as a seductive facade for a capricious system and adopted as an identity by those who are trying to make their way within it.

Thus the ideal worker matches the traditional profile of the enthusiastic virtuoso: an individual who is versatile and rootless, inventive and adaptable; who self-motivates and works long hours, tapping internal and external resources; who is open to reinvention, emphasizing potential and promise as opposed to past achievements; one who loves the work so much, he or she would do it no matter what, and so expects little compensation or commitment in return—amateurs and interns, for example.

The “free” credo promoted by writers such as Chris Anderson and other new-media thinkers has helped lodge a new rung on an ever-lengthening educational and career ladder: the now obligatory internship. Like artists and culture makers of all stripes, interns are said to be “entrepreneurs” and “free agents” investing in their “personal brands.” “The position of interns is not unlike that of many young journalists, musicians, and filmmakers who are now expected to do online work for no pay as a way to boost their portfolios,” writes Ross Perlin, author of the excellent book Intern Nation. “If getting attention and building a reputation online are often seen as more valuable than immediate ‘monetization,’ the same theory is being propounded for internships in the analog world—with exposure, contacts, and references advanced as the prerequisite, or even plausible alternative, to making money.”




As Perlin documents in vivid detail, capitalizing on desperate résumé-building college students and postgraduates exacerbates inequality. Who but the relatively well off can afford to take a job that doesn’t pay? Those who lack financial means are either shut out of opportunities or forced to support themselves with loans, going into debt for the privilege of working for free.

Creativity is invoked time and again to justify low wages and job insecurity. Across all sectors of the economy, responsibility for socially valuable work, from journalism to teaching and beyond, is being off-loaded onto individuals as institutions retreat from obligations to support efforts that aren’t immediately or immensely profitable. The Chronicle of Higher Education urges graduate students to imagine themselves as artists, to better prepare for the possibility of impoverishment when tenure-track jobs fail to materialize: “We must think of graduate school as more like choosing to go to New York to become a painter or deciding to travel to Hollywood to become an actor. Those arts-based careers have always married hope and desperation into a tense relationship.”


In a similar vein, NPR reports that the “temp-worker lifestyle” is a kind of “performance art,” a statement that conjures a fearless entertainer mid-tightrope or an acrobat hurtling toward the next trapeze without a safety net—a thrilling image, especially to employers who would prefer not to provide benefits.




The romantic stereotype of the struggling artist is familiar to the musician Marc Ribot, a legendary figure on the New York jazz scene who has worked with Marianne Faithfull, Elvis Costello, John Zorn, Tom Waits, Alison Krauss, Robert Plant, and even Elton John. Ribot tells me he had an epiphany watching a “great but lousy” made-for-TV movie about Apple computers. As he tells it, two exhausted employees are complaining about working eighteen-hour days with no weekends when an actor playing Steve Jobs tells them to suck it up—they’re not regular workers at a stodgy company like IBM but artists.

“In other words art was the new model for this form of labor,” Ribot says, explaining his insight. “The model they chose is musicians, like Bruce Springsteen staying up all night to get that perfect track. Their life does not resemble their parents’ life working at IBM from nine to five, and certainly doesn’t resemble their parents’ pay structures—it’s all back end, no front end. All transfer of risk to the worker.” (In 2011 Apple Store workers upset over pay disparities were told, “Money shouldn’t be an issue when you’re employed at Apple. Working at Apple should be viewed as an experience.”)




In Ribot’s field this means the more uncertain part of the business—the actual writing, recording, and promoting of music—is increasingly “outsourced” to individuals while big companies dominate arenas that are more likely to be profitable, like concert sales and distribution (Ticketmaster, Amazon, iTunes, and Google Play, none of which invests in music, though all reap rewards from its release). “That technological change is upon us is undeniable and irreversible,” Ribot wrote about the challenges musicians face as a consequence of digitization. “It will probably not spell the end of music as a commodity, although it may change drastically who is profiting off whose music. Whether these changes will create a positive future for producers or consumers of music depends on whether musicians can organize the legal and collective struggle necessary to ensure that those who profit off music in any form pay the people who make it.”

Ribot quotes John Lennon: “You think you’re so clever and classless and free.” Americans in general like to think of themselves as having transcended economic categories and hierarchies, Ribot says, and artists are no exception. During the Great Depression artists briefly began to think of themselves as workers and to organize as such, amassing social and political power with some success, but today it’s more popular to speak of artists as entrepreneurs or brands, designations that further obscure the issue of labor and exploitation by comparing individual artists to corporate entities or sole proprietors of small businesses.

If artists are fortunate enough to earn money from their art, they tend to receive percentages, fees, or royalties rather than wages; they play “gigs” or do “projects” rather than hold steady jobs, which means they don’t recognize the standard breakdowns of boss and worker. They also spend a lot of time on the road, not rooted in one place; hence they are not able to organize and advocate for their rights.

What’s missing, as Ribot sees it, is a way to understand how the economy has evolved away from the old industrial model and how value is extracted within the new order. “I think that people, not just musicians, need to do an analysis so they stop asking the question, ‘Who is my legal employer?’ and start asking, ‘Who works, who creates things that people need, and who profits from it?’” These questions, Ribot wagers, could be the first step to understanding the model of freelance, flexible labor that has become increasingly dominant across all sectors of the economy, not just in creative fields.

We are told that a war is being waged between the decaying institutions of the off-line world and emerging digital dynamos, between closed industrial systems and open networked ones, between professionals who cling to the past and amateurs who represent the future. The cheerleaders of technological disruption are not alone in their hyperbole. Champions of the old order also talk in terms that reinforce a seemingly unbridgeable divide.

Unpaid amateurs have been likened to monkeys with typewriters, gate-crashing the cultural conversation without having been vetted by an official credentialing authority or given the approval of an established institution. “The professional is being replaced by the amateur, the lexicographer by the layperson, the Harvard professor by the unschooled populace,” according to Andrew Keen, who remains obstinately oblivious to the failings of the professionally produced mass culture he defends.

The Internet is decried as a province of know-nothing narcissists motivated by a juvenile desire for fame and fortune, a virtual backwater of vulgarity and phoniness. Jaron Lanier, the technologist turned skeptic, has taken aim at what he calls “digital Maoism” and the ascendance of the “hive mind.” Social media, as Lanier sees it, demean rather than elevate us, emphasizing the machine over the human, the crowd over the individual, the partial over the integral. The problem is not just that Web 2.0 erodes professionalism but, more fundamentally, that it threatens originality and autonomy.

Outrage has taken hold on both sides. But the lines in the sand are not as neatly drawn as the two camps maintain. Wikipedia, considered the ultimate example of amateur triumph as well as the cause of endless hand-wringing, hardly hails the “death of the expert” (the common claim by both those who love the site and those who despise it). While it is true that anyone can contribute to the encyclopedia, entries must have references, and many of the sources cited qualify as professional: most entries boast citations of academic articles, traditional books, and news stories. Similarly, social production does not take place entirely outside the mainstream. Up to 85 percent of the open source Linux developers said to be paradigmatic of this new age of volunteerism are, in fact, employees of large corporations that depend on nonproprietary software.




More generally, there is little evidence that the Internet has precipitated a mass rejection of more traditionally produced fare. What we are witnessing is a convergence, not a coup. Peer-to-peer sites—estimated to take up half the Internet’s bandwidth—are overwhelmingly used to distribute traditional commercial content, namely mainstream movies and music. People gather on message boards to comment on their favorite television shows, which they download or stream online. The most popular videos on YouTube, year after year, are the product of conglomerate record labels, not bedroom inventions. Some of the most visited sites are corporate productions like CNN. Most links circulated on social media are professionally produced. The challenge is to understand how power and influence are distributed within this mongrel space where professional and amateur combine.

Consider, for a moment, Clay Shirky, whose back-flap biography boasts corporate consulting gigs with Nokia, News Corp, BP, the U.S. Navy, Lego, and others. Shirky embodies the strange mix of technological utopianism and business opportunism common to many Internet entrepreneurs and commentators, a combination of populist rhetoric and unrepentant commercialism. Many of amateurism’s loudest advocates are also business apologists, claiming to promote cultural democracy while actually advising corporations on how to seize “collaboration and self-organization as powerful new levers to cut costs” in order to “discover the true dividends of collective capability and genius” and “usher their organizations into the twenty-first century.”




The grassroots rhetoric of networked amateurism has been harnessed to corporate strategy, continuing a nefarious tradition. Since the 1970s populist outrage has been yoked to free-market ideology by those who exploit cultural grievances to shore up their power and influence, directing public animus away from economic elites and toward cultural ones, away from plutocrats and toward professionals. But it doesn’t follow that criticizing “professionals” or “experts” or “cultural elites” means that we are striking a blow against the real powers; and when we uphold amateur creativity, we are not necessarily resolving the deeper problems of entrenched privilege or the irresistible imperative of profit. Where online platforms are concerned, our digital pastimes can sometimes promote positive social change and sometimes hasten the transfer of wealth to Silicon Valley billionaires.

Even well-intentioned celebration of networked amateurism has the potential to obscure the way money still circulates. That’s the problem with PressPausePlay, a slick documentary about the digital revolution that premiered at a leading American film festival. The directors examine the ways new tools have sparked a creative overhaul by allowing everyone to participate—or at least everyone who owns the latest Apple products. That many of the liberated media makers featured in the movie turn out to work in advertising and promotion, like celebrity business writer Seth Godin, who boasts of his ability to turn his books into bestsellers by harnessing the power of the Web, underscores how the hype around the cultural upheaval sparked by connective technologies easily slides from making to marketing. While the filmmakers pay tribute to DIY principles and praise the empowering potential of digital tools unavailable a decade ago, they make little mention of the fact that the telecommunications giant Ericsson provided half of the movie’s seven-hundred-thousand-dollar budget and promotional support.




We should be skeptical of the narrative of democratization by technology alone. The promotion of Internet-enabled amateurism is a lazy substitute for real equality of opportunity. More deeply, it’s a symptom of the retreat over the past half century from the ideals of meaningful work, free time, and shared prosperity—an agenda that entailed enlisting technological innovation for the welfare of each person, not just the enrichment of the few.

Instead of devising truly liberating ways to harness machines to remake the economy, whether by designing satisfying jobs or through the social provision of a basic income to everyone regardless of work status, we have Amazon employees toiling on the warehouse floor for eleven dollars an hour and Google contract workers who get fired after a year so they don’t have to be brought on full-time. Cutting-edge new-media companies valued in the tens of billions retain employees numbering in the lowly thousands, and everyone else is out of luck. At the same time, they hoard their record-setting profits, sitting on mountains of cash instead of investing it in ways that would benefit us all.

The zeal for amateurism looks less emancipatory—as much necessity as choice—when you consider the crisis of rising educational costs, indebtedness, and high unemployment, all while the top 1 percent captures an ever-growing portion of the surplus generated by increased productivity. (Though productivity has risen 23 percent since 2000, real hourly pay has effectively stagnated.)


The consequences are particularly stark for young people: between 1984 and 2009, the median net worth for householders under thirty-five was down 68 percent while rising 42 percent for those over sixty-five.


Many are delaying starting families of their own and moving back in with Mom and Dad.

Our society’s increasing dependence on free labor—online and off—is immoral in this light. The celebration of networked amateurism—and of social production and the cognitive surplus—glosses over the question of who benefits from our uncompensated participation online. Though some internships are enjoyable and useful, the real beneficiary of this arrangement is corporate America, which reaps the equivalent of a two-billion-dollar annual subsidy.


And many of the digital platforms to which we contribute are highly profitable entities, run not for love but for money.

Creative people have historically been encouraged to ignore economic issues and maintain indifference to matters like money and salaries. Many of us believe that art and culture should not succumb to the dictates of the market, and one way to do this is to act as though the market doesn’t exist, to devise a shield to deflect its distorting influence, and uphold the lack of compensation as virtuous. This stance can provide vital breathing room, but it can also perpetuate inequality. “I consistently come across people valiantly trying to defy an economic class into which they were born,” Richard Florida writes. “This is particularly true of the young descendants of the truly wealthy—the capitalist class—who frequently describe themselves as just ‘ordinary’ creative people working on music, film or intellectual endeavors of one sort or another.”

How valiant to deny the importance of money when it is had in abundance. “Economic power is first and foremost a power to keep necessity at arm’s length,” the French sociologist Pierre Bourdieu observed. Especially, it seems, the necessity of talking honestly about economics.

Those who applaud social production and networked amateurism, the colorful cacophony that is the Internet, and the creative capacities of everyday people to produce entertaining and enlightening things online, are right to marvel. There is amazing inventiveness, boundless talent and ability, and overwhelming generosity on display. Where they go wrong is in thinking that the Internet is an egalitarian, let alone revolutionary, platform for our self-expression and development, that being able to shout into the digital torrent is adequate for democracy.

The struggle between amateurs and professionals is, fundamentally, a distraction. The tragedy for all of us is that we find ourselves in a world where the qualities that define professional work—stability, social purpose, autonomy, and intrinsic and extrinsic rewards—are scarce. “In part, the blame falls on the corporate elite,” Barbara Ehrenreich wrote back in 1989, “which demands ever more bankers and lawyers, on the one hand, and low-paid helots on the other.” These low-paid helots are now unpaid interns and networked amateurs. The rub is that over the intervening years we have somehow deceived ourselves into believing that this state of insecurity and inequity is a form of liberation.




3

WHAT WE WANT


Today it is standard wisdom that a whole new kind of person lives in our midst, the digital native—“2.0 people,” as the novelist Zadie Smith dubbed them. Exalted by techno-enthusiasts for being hyper-connected and sociable, technically savvy and novelty seeking—and chastised by techno-skeptics for those very same traits—this new generation and its predecessors are supposedly separated by a gulf that is immense and unbridgeable. Self-appointed experts tell us that “today’s students are no longer the people our educational system was designed to teach”; they “experience friendship” and “relate to information differently” than all who came before.




Reflecting on this strange new species, the skeptics are inclined to agree. “The cyber-revolution is bringing about a different magnitude of change, one that marks a massive discontinuity,” warns the literary critic Sven Birkerts. “Pre-Digital Man has more in common with his counterpart in the agora than he will with a Digital Native of the year 2050.” It is not just cultural or social references that divide the natives from their pre-digital counterparts, but “core phenomenological understandings.” Their very modes of perception and sense making, of experiencing the world and interpreting it, Birkerts claims, are simply incomprehensible to their elders. They are different creatures altogether.




The tech-enthusiasts make a similarly extreme case for total generational divergence, idolizing digital natives with fervor and ebullience equal and opposite to Birkerts’s unease. These natives, born and raised in networked waters, surf shamelessly, with no need for privacy or solitude. As described by Nick Bilton in his book I Live in the Future and Here’s How It Works, digital natives prefer media in “bytes” and “snacks” as opposed to full “meals”—defined as the sort of lengthy article one might find in the New Yorker magazine. Digital natives believe “immediacy trumps quality.”




They “unabashedly create and share content—any type of content,” and, unlike digital immigrants, they never suffer from information overload. People who have grown up online also do not read the news. Or rather, we are told, for them the news is whatever their friends deem interesting, not what some organization or authoritative source says is significant. “This is the way I navigate today as well,” Bilton, technology writer for the New York Times, proudly declares. “If the news is important, it will find me.”


(Notably, Bilton’s assertion was contradicted by a Harvard study that found eighteen- to twenty-nine-year-olds still prefer to get their political news from established newspapers, print or digital, rather than from the social media streams of their friends.)






